Blocks uses raycast layers to handle multiple interactions.
There are layers involved, but I'm not 100% sure what you mean by 'multiple interactions'.
Blocks clones all of its GameObjects from prefabs.
Correct!
Each 'Block' has two materials at a given time, or a custom emission mask with an animated alpha cut-out (?)
Each object has two materials. The edge material is a basic emissive color, while the faces use an animated alpha cutoff driven by a texture ramp. The emissive edges are enhanced with a bloom filter, and further by a point light with a cookie that matches the geometry of the edges. The cookie is what gives the illusion that the object is actually casting light into the scene.
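The alpha-cutoff effect described above can be sketched in a few lines. This is a minimal illustration in Python of the per-pixel test a cutoff shader performs, not the actual shader from Blocks: a pixel survives only if its ramp-texture value exceeds an animated threshold, so animating the threshold "dissolves" the faces. All names here are made up for the example.

```python
# Illustrative sketch of an alpha-cutoff "dissolve" driven by a ramp
# texture; names and values are assumptions, not taken from Blocks.

def cutoff_visible(ramp_value: float, cutoff: float) -> bool:
    """A pixel passes the alpha test only if its ramp value
    exceeds the animated cutoff threshold."""
    return ramp_value > cutoff

def dissolve_frame(ramp: list[float], t: float) -> list[bool]:
    """Animate cutoff from 0 (fully visible) to 1 (fully dissolved)."""
    cutoff = max(0.0, min(1.0, t))
    return [cutoff_visible(v, cutoff) for v in ramp]

ramp = [0.1, 0.4, 0.7, 0.9]  # one grayscale sample per pixel
print(dissolve_frame(ramp, 0.5))  # → [False, False, True, True]
```

Because the ramp values vary across the surface, raising the threshold over time makes darker regions vanish first, giving the animated cut-out look.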
For the 'grab' interactions, the four fingers (or the palm, or both) are doing a spherical raycast to detect GameObjects; the radius is nominal.
There is a single spherecast done per-hand to detect candidates for a grab action, but the actual logic involved with detecting the grab is a little complicated. We are planning to release a module in the future that makes this easy to integrate and use.
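For intuition, the geometry behind a spherecast can be sketched as follows. Unity's actual call is C# (`Physics.SphereCast`); this Python sketch, with made-up names and point centers standing in for colliders, just shows the candidate test: an object is a hit if its center lies within the sphere's radius of the swept path.

```python
import math

def sphere_cast(origin, direction, radius, max_dist, centers):
    """Return indices of object centers lying within `radius` of a
    sphere swept from `origin` along `direction` for `max_dist` units.
    Illustrative only: real colliders have extent, not just centers."""
    mag = math.sqrt(sum(c * c for c in direction))
    axis = [c / mag for c in direction]  # normalized cast direction
    hits = []
    for i, center in enumerate(centers):
        rel = [center[j] - origin[j] for j in range(3)]
        # distance along the ray to the closest point, clamped to the sweep
        t = max(0.0, min(max_dist, sum(rel[j] * axis[j] for j in range(3))))
        closest = [origin[j] + t * axis[j] for j in range(3)]
        if math.dist(center, closest) <= radius:
            hits.append(i)
    return hits

blocks = [(0.0, 0.0, 0.2), (0.0, 0.5, 0.2), (0.0, 0.0, 2.0)]
print(sphere_cast((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.1, 1.0, blocks))  # → [0]
```

Only the first block is a candidate: the second is too far off-axis, and the third is beyond the sweep distance.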
For the 'grab' interactions, there's not actually any physics applied to the GameObject once grabbed. Instead, the GameObject is parented to the hand.
Once again there is a little more to it than just parenting (notice that when you grab an object and wiggle your fingers, the object moves too), but you have the right idea. The rigidbody of the object is set to kinematic, and is manipulated directly instead of through forces.
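The "kinematic follow" idea can be sketched like this: on grab, record the object's offset in hand-local space; each frame, re-derive its world pose from the current hand pose and write it to the kinematic body directly, as parenting would, but under code control. This Python sketch is a simplification under assumed names; it omits rotation entirely (a real version composes the hand's full pose), and none of it is the actual Blocks code.

```python
# Illustrative "kinematic follow": store a hand-local offset on grab,
# then reapply it from the moving hand each frame. Rotation is omitted
# for brevity; names are assumptions, not from Blocks.

def world_to_local(hand_pos, obj_pos):
    """Object position expressed relative to the hand (translation only)."""
    return tuple(o - h for o, h in zip(obj_pos, hand_pos))

def local_to_world(hand_pos, local):
    """Recover the object's world position from the current hand pose."""
    return tuple(h + l for h, l in zip(hand_pos, local))

# On grab: record the offset between hand and object.
offset = world_to_local((1.0, 1.0, 0.0), (1.25, 1.5, 0.0))
# Each frame: the hand has moved; compute and apply the new target pose.
print(local_to_world((2.0, 1.0, 0.0), offset))  # → (2.25, 1.5, 0.0)
```

Writing the pose directly (rather than applying forces) is what keeps the grabbed object glued to the hand with no physics lag.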
Physics is re-enabled when the GameObject is no longer parented.
Correct!
Colliders are being enabled / disabled dynamically based on the raycast procedure.
There are no raycasts being done in the project. The only time a collider is disabled is for a moment right after it has spawned.
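That brief disable-after-spawn window can be sketched as a simple timer. This is only an illustration with an assumed delay value and made-up names; Blocks' actual timing and implementation may differ.

```python
# Sketch: a collider stays disabled for a short window after spawn,
# then switches on. Delay and names are assumptions, not from Blocks.

class SpawnedBlock:
    COLLIDER_DELAY = 0.25  # seconds; illustrative value

    def __init__(self):
        self.age = 0.0
        self.collider_enabled = False  # off for a moment after spawning

    def update(self, dt: float) -> None:
        self.age += dt
        if not self.collider_enabled and self.age >= self.COLLIDER_DELAY:
            self.collider_enabled = True

block = SpawnedBlock()
block.update(0.1)
print(block.collider_enabled)  # → False (still inside the spawn window)
block.update(0.2)
print(block.collider_enabled)  # → True  (window elapsed)
```

A window like this avoids the freshly spawned object immediately colliding with the hand that is still creating it.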
For the left-hand UI panel, Blocks uses a raycast cone (or a cone trigger) from the palm and detects the camera position to activate the UI panel, which stays active while hit; does it also use the palm? The cone's eccentricity is nominal.
It uses a simple cone trigger to activate the panel. There is a transform that matches the position of the palm but always looks toward the player's head position. The entire GUI is anchored to this transform. The 'spring' action of the GUI is all code, for ease of tuning and control.
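The cone-trigger check reduces to an angle test. Here is a minimal Python sketch, with assumed names and an assumed half-angle, of the math behind it: the panel activates when the player's head falls inside a cone opening out from the palm along its normal.

```python
import math

def in_cone(apex, axis, half_angle_deg, point) -> bool:
    """True if `point` lies inside the cone with tip `apex`, direction
    `axis`, and the given half-angle. Illustrative names/values only."""
    to_point = [p - a for p, a in zip(point, apex)]
    mag_axis = math.sqrt(sum(c * c for c in axis))
    mag_pt = math.sqrt(sum(c * c for c in to_point))
    if mag_pt == 0.0:
        return True  # the point sits at the cone's tip
    cos_angle = sum(a * b for a, b in zip(axis, to_point)) / (mag_axis * mag_pt)
    return cos_angle >= math.cos(math.radians(half_angle_deg))

palm = (0.0, 1.0, 0.0)
palm_normal = (0.0, 0.0, 1.0)  # palm facing the player; assumed setup
print(in_cone(palm, palm_normal, 30.0, (0.0, 1.0, 2.0)))  # → True  (head in front of palm)
print(in_cone(palm, palm_normal, 30.0, (2.0, 1.0, 0.5)))  # → False (head off to the side)
```

A trigger like this is cheap to evaluate every frame, and the half-angle gives one obvious knob for tuning how deliberately the player must face their palm.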
All in all, you are on a great track! Let me know if you have any additional questions, I'm happy to help!