Fuse particles from image textures

Three-part question:

1] This seems like something that should already be a solved problem, but I'm not finding it. What's the way to convert an image (texture or otherwise) into a Fuse particle array, transferring the position/color (or color as alpha, for my use case) information?

2] Is there a way to take a stack of images and do this in a manner similar to the "How to Compute Texture 3D" example from the Fuse->Compute help documentation? That is, convert an image series to a 3D texture and use that as the basis for particle position and color.
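To illustrate what I mean by the stacking step (sketched on the CPU in Python with NumPy; the array names and sizes are made up, and in practice this would happen on the GPU as a 3D texture):

```python
import numpy as np

# Hypothetical stack of 3 grayscale 4x4 slices standing in for an image series.
slices = [np.full((4, 4), i / 2.0, dtype=np.float32) for i in range(3)]

# Stacking the series gives a (depth, height, width) volume, i.e. a 3D texture.
volume = np.stack(slices, axis=0)
print(volume.shape)  # → (3, 4, 4)

# A particle would then sample its value at a volume coordinate (z, y, x):
value = volume[1, 2, 2]  # middle slice, so 0.5 here
```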

3] In doing this, is it possible to include thresholding that eliminates pixels below a value threshold, so resources aren't wasted on invisible particles?
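For clarity, here is the overall logic I'm after (questions 1 and 3), sketched on the CPU in Python with NumPy; the image data and threshold value are just illustrative, and the real thing would presumably be a Fuse compute pass:

```python
import numpy as np

# Hypothetical 4x4 RGBA image standing in for a loaded texture.
h, w = 4, 4
image = np.zeros((h, w, 4), dtype=np.float32)
image[1, 2] = [1.0, 0.5, 0.0, 1.0]   # one bright pixel
image[3, 0] = [0.1, 0.1, 0.1, 0.05]  # one nearly invisible pixel

threshold = 0.1  # discard pixels whose alpha is at or below this value

# Keep only pixels above the threshold, so no particles are spent on them.
ys, xs = np.nonzero(image[..., 3] > threshold)
positions = np.stack([xs / w, ys / h], axis=1)  # pixel coords -> 0..1 UV space
colors = image[ys, xs]                          # per-particle RGBA

print(len(positions))  # → 1 (only the bright pixel survives)
```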

Thanks for any info in advance!
