
June 11, 2011

Tomography Brainstorming Session.

Ideas

  • I'm wondering about the current state of scanning poured liquids. We often get a qualitative description of the system, but rarely dense quantitative descriptions.
  • Has there been much success in quantitative descriptions of fluid flow? Can you do PIV (particle image velocimetry) in the interior of fluids?

  • A Georgia Tech SIGGRAPH submission in 2009 put dye in water, projected a pattern with a video projector, captured stereoscopically, and combined this with predictions from fluid solvers to recover the fluid shape. But this is limited because you can only scan what you see.

  • There's a lot of numerical prediction in this area, but many of the current methods don't handle estimation robustly.

  • Furthermore, there are no solid benchmark cases or quantitative descriptions in use for fluids.

  • One computer-science application of fluid imaging techniques could be creating ground-truth data sets for fluid simulators. Fluid simulators currently cannot even specify units; their results are considered "physically plausible" or "physical-looking" rather than physically accurate. A recent SIGGRAPH paper by Tyson couldn't answer whether or not they had a real viscosity model. We could try to contribute in this area.

  • Verification was a topic of a recent conference, and verifying computer animations of fluids was a big question. Character animation people are starting to show ground truth by putting an animated character walking next to a recording of a real person walking; this seems not to be the case with fluids. PIV may work, but I doubt anything would show up. There has been work on tomography approaches using fluorescent dyes and poured fluids, used to estimate the thickness of the fluid, but refraction makes the problem difficult. With PIV you can observe particles, but there is refraction at the surface of the fluid, so if you can't get a correct model of the surface, you cannot say much about the particles inside. (A sketch of the core PIV step follows this item.)
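
    Since PIV keeps coming up, here is a minimal sketch of its core step: cross-correlating interrogation windows between two frames and reading the displacement off the correlation peak. This is a toy illustration in numpy, assuming periodic wrap-around and a synthetic shift; the window size and shift are arbitrary.

        import numpy as np

        def piv_displacement(win_a, win_b):
            """Displacement between two interrogation windows, from the
            peak of their FFT-based circular cross-correlation."""
            a = win_a - win_a.mean()
            b = win_b - win_b.mean()
            corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Map wrap-around indices to signed displacements.
            return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

        rng = np.random.default_rng(0)
        frame = rng.random((64, 64))             # fake particle pattern
        shifted = np.roll(frame, (3, 5), axis=(0, 1))
        print(piv_displacement(frame, shifted))  # -> [3, 5] (dy, dx)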

  • Idea: when things are in motion you get a Doppler effect, and we might be able to capture this by observing glowing fluids in motion. If you observe the volume of fluid from different directions, each ray maps velocities to specific wavelengths and effectively sums up the velocity components along itself. Could you use this approach to estimate velocity fields for the fluid? For instance, with a glowing fluid, reddish emission would mark particles moving away from the observer and bluish emission particles moving toward it. (The basic relation is sketched below.)
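
    For reference, the non-relativistic Doppler relation behind this idea. The wavelengths below are made up purely to show scale; note that realistic flow speeds produce shifts far too small to resolve with an ordinary camera.

        C = 299_792_458.0  # speed of light, m/s

        def line_of_sight_velocity(lam_observed, lam_emitted):
            """Velocity component along the viewing ray from a Doppler shift.
            Positive = receding (redshift), negative = approaching (blueshift)."""
            return C * (lam_observed - lam_emitted) / lam_emitted

        # A ~1 m/s flow shifts a 550 nm line by only ~1.8e-6 nm:
        print(line_of_sight_velocity(550.0000018e-9, 550e-9))  # ~0.98 m/s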

  • For a straightforward application of tomography you need a scalar field; here each measurement would only capture the component along the projection ray. (Contrast with the scalar case sketched below.)
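
    To make the scalar-field point concrete, a toy parallel-ray forward projection on a numpy voxel grid; summing along an axis stands in for the line integral along each ray, which is what standard tomography inverts.

        import numpy as np

        # Toy scalar field (e.g., dye concentration) on a 3D voxel grid.
        field = np.zeros((32, 32, 32))
        field[10:20, 12:18, 8:24] = 1.0

        # Parallel-ray projection along z: each pixel of the resulting
        # image is the sum (line integral) of the field along one ray.
        projection_z = field.sum(axis=2)
        print(projection_z.shape)  # (32, 32)

    Repeating this for many view directions gives the projections a tomographic reconstruction needs. A vector quantity like velocity only contributes its along-ray component per view, which is what makes the Doppler variant harder.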

  • What about using lenslets from more than one angle? Yeah, we've done some of that; you can arrange lens arrays in different configurations for a camera array. For tomography it's not going to be very good, though, and it's essentially the same as the turntable setup Brad uses. What about getting different views in different colours? However, you would need many lasers to build one pixel, which may not be plausible.

  • How feasible is it to start playing with different scanning devices, or are we stuck with cameras and the current set-ups we have?

  • The nice thing about cameras is that we can quickly try ideas out. If we want to use more precise or specialized instruments, we'd better develop our theory first so we can just go and run tests; you can often only sign up for limited time slots on that kind of equipment. For instance, the femtosecond laser belongs to the physics department and everyone rents it per hour, so it's difficult to get your hands on. The hurdle for getting at this is much greater than just getting cameras. You may have to get into collaborations with people who have this equipment, which is no problem; we could offer to go for a few weeks, get some data, and come back.

  • Anyway, there's the intermediate option where we just get new equipment, for instance a cheaper laser. First do the theory to prove that something works and motivate the purchase of such equipment, then apply for a grant. Or we can run feasibility studies on equipment that somebody else has, and then buy it ourselves if we like the results we're getting.

  • Gordon: biological microscope/iPhone setup idea. Put a lenslet array on an iPhone, since it has such a high-resolution screen, and use this iPhone/lenslet combination as the background illumination for a microscope, placed under the microscopic object. Something similar has been done before, but not with reconstruction.

  • However, nowadays most good microscopes are not back illuminated.

  • Well, it could be a cheap hardware choice, and as a bonus we're throwing in the 4D coded-illumination augmentation the lenslet array allows.

  • It might be interesting to show that you can get away with a cheap hardware solution, as has been done with other iPhone applications, making this technology accessible to people who may not normally be able to afford it.

  • In terms of logistics for this application, refraction may not be a problem. The bottleneck in this situation wouldn't be pixel resolution; it could be the aperture of the minifying lens.

  • I don't think the iPhone will give you much intensity.

  • Well, we have seen that cheap cell phones can replace really expensive equipment in optics. What if we used this to replace some of the prohibitively expensive microscopy in medicine? What would be the expensive part, though? The microscope? It could be neat to show that a high-school microscope plus an iPhone outperforms a more expensive setup.

  • The scientific contribution could be visualizing previously unseen aspects of biological systems via Schlieren techniques.

  • The issue will be that you have only one viewpoint.

  • If you just have one objective lens you only get a 2D sampling of the 4D ray space.

  • The problem is that you have a 4D lenslet display in the background, but only a 2D subset of it makes it to the camera, and you can't vary this subset. You would need to position a light-field camera on top of the setup to fix this problem.

  • But does it get flaky at that scale? Well, we do have a micron-scale translation stage in the lab that we could use to test this; its highest resolution is about 4 microns.

  • James: another application. I worked at a company in Burnaby that wanted to build a fusion reactor. They used liquid lead in a vessel, evacuated a central column, and shot plasma into the centre. They then created sound waves from the outside that press the lead into the middle and crush the plasma.

  • The problem is, they have no idea how to track this or check whether it's working.

  • Could we use tomography here to take measurements of this process?

  • I guess we could use ultrasound? But then we're getting away from wave optics, and we just don't have much experience with that in the lab.

  • They were hoping to image it somehow. Basically, any information in the way of an image or recording of what's happening during this process would be of great use to them.

  • The catch is, whatever you put on top to record this is going to get blown off when they release the pressure near the end of the experiment.

  • If you can get even one frame at the right time, exposed correctly, that would be helpful to them.

  • I wonder if they could get parallax. You would have a liquid lead surface, which should be reflective, so we could bounce tomography rays off it.

  • I know astronomy people have liquid mirrors made of mercury that spin to get a smooth surface. You can't tilt them, but these kinds of spinning liquids, driven the right way, should be very smooth.

  • Gallium in a nitrogen atmosphere won't form a skin; otherwise it will.

  • What about RF sensors for sensing distance? What would you be broadcasting and recording? It's mostly a problem because of speed and frame rate, and it might not be useful for reconstruction. What happens to light that passes through plasma? Is it just absorbed? There's also going to be a large magnetic field present, which is another consideration.

  • Even a single-shot setup could be enough, and then we could do multiple setups.

  • Another idea: I was wondering if we could reconstruct an ocean-air interaction using tomography. Try to model tidal ebb and flow after something like a tsunami. We could extrapolate from a small model setup to oceanography and make contributions this way.

  • Maybe use the 3D printer to print a to-scale replica of a bay, fill it with water, and image that. Do something like a single-surface reconstruction. We would have to be careful about scale: water density and gravity considerations might not scale up well.

  • I have seen a set-up with model islands used to reconstruct tidal ebb and flow; it was used to simulate events in the ocean. So I think it may be possible.

  • Nevertheless, exactly how you could scale it up is unclear. But they do do it for ships in test tanks, so you could still get useful information; the fine-scale behaviour just may not match the real thing so well. (The standard scaling argument is sketched below.)
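
    Background on why fine-scale behaviour doesn't transfer (standard tow-tank reasoning, not something stated in the meeting): a model run at matched Froude number, which governs gravity-driven waves, cannot simultaneously match Reynolds number, which governs viscous effects. A quick check in Python with illustrative numbers:

        import math

        G = 9.81      # gravity, m/s^2
        NU = 1.0e-6   # kinematic viscosity of water, m^2/s

        def froude(v, length):   return v / math.sqrt(G * length)
        def reynolds(v, length): return v * length / NU

        # Full-scale ship vs. a 1:100 model run at equal Froude number.
        L_full, V_full = 100.0, 10.0
        L_model = L_full / 100.0
        V_model = V_full * math.sqrt(L_model / L_full)  # Froude scaling: V ~ sqrt(L)

        print(froude(V_full, L_full), froude(V_model, L_model))      # equal (~0.32)
        print(reynolds(V_full, L_full), reynolds(V_model, L_model))  # 1e9 vs 1e6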

  • What about using satellite imagery for tomography? Can you pick up features like tsunamis quickly enough? Not really; we could analyze what happens afterwards, but the data is noisy. This is not tomography in a strict sense because there are no voxels. It would be challenging because you would need simultaneous exposures from different satellites. Resolving waves? We may be able to average over many small waves to reconstruct.

  • However, there's a large problem with satellites: they scan with a 1D sensor, the ultimate rolling-shutter artefact problem.

  • Cloud patterns? That would be scattering tomography. For anything outdoors it's difficult to get the necessary baselines.

  • The baseline might not be a problem: we could just distribute cameras on the North Shore and take pictures. But clouds are so thick that rays might not pass through them.

  • If you can scale it up, it may be helpful to work inside the lab. It would, however, be very application-specific. You'll need to find out exactly what people want to learn from the experiment before specifying a setup.

  • We could measure the waves from a hull form and compare them to simulated waves to get a 3D reconstruction of the waves (this needs a tow tank).

  • Schlieren and water: the problem is that Schlieren picks up refractive-index gradients, which in gases come from compression, and water doesn't compress. (The standard relation is given below.)
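
    For reference, the textbook Schlieren relation (background, not from the meeting): a ray crossing the volume along z is deflected by the transverse refractive-index gradients,

        \epsilon_x \approx \frac{1}{n_0} \int \frac{\partial n}{\partial x} \, dz,
        \qquad
        \epsilon_y \approx \frac{1}{n_0} \int \frac{\partial n}{\partial y} \, dz.

    In gases, n tracks density (Gladstone-Dale), so compression shows up directly; in water the gradients would have to come from temperature, salinity, or mixing instead.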

  • Tomography of fluids: what about fluid mixtures? We tried water and corn syrup, which results in interesting patterns. Animation people often can't do this very well, and the food industry wants to measure it. To what degree do we really want to target a specific application, or do we want to find some general solution that contributes to the way reconstructions are done? That would be a good technical contribution.

  • What were the big limitations to mixing? Brad: turbulence in the image, not being able to image it fast enough, and small particles. It's the same kind of effect Gordon showed with water: the refractive-index difference is so large that it changes the focus. If you wanted any robust quantitative measurement you would have to separate absorption out of the system, but in some of the applications we only care that the measurements are good enough. There are commercial cameras that can do up to 30 fps at high resolutions; I don't know if these industrial cameras have a software suite with the flexibility that Canon cameras do.

  • Some of the things we did in testing were close enough that you could get useful data from them; they were just out of focus. It was so finicky that we couldn't get it to work with BOS (background-oriented Schlieren).

  • Gordon: you could take a camera array and capture the light field of an object in front of an arbitrary diffuse background. Whenever you add refraction to that, the light field becomes 4D. You could reconstruct the object by applying constraints, and you could do different types of reconstructions (not necessarily tomography) with a narrow camera baseline.

  • Another application for fluid modelling: gas pipelines. Gas leaks from a pipeline and one needs to find the leak. How much gas is leaking? You need a flow-rate estimate, so reconstruct the plume. We could use a one-sided camera array that doesn't need a reference background, scan along the pipeline, and then try to figure out from neighbouring scans whether there's a plume there.

  • The idea was to have a background and use optical flow, but you don't necessarily have a clean background, and optical flow might not be high-resolution enough for this. (A minimal starting point is sketched below.)
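
    As a concrete starting point for the plume idea, a minimal dense optical-flow sketch using OpenCV's Farneback method; the parameter values are the common defaults from the OpenCV documentation, not tuned for plumes, and the frames here are synthetic stand-ins.

        import cv2
        import numpy as np

        def dense_flow(prev_gray, next_gray):
            """Per-pixel (dx, dy) displacements between two grayscale frames."""
            return cv2.calcOpticalFlowFarneback(
                prev_gray, next_gray, None,
                pyr_scale=0.5, levels=3, winsize=15,
                iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

        prev_frame = np.random.randint(0, 255, (240, 320), dtype=np.uint8)
        next_frame = np.roll(prev_frame, 2, axis=1)   # fake 2-pixel drift
        flow = dense_flow(prev_frame, next_frame)
        print(flow.shape, flow[..., 0].mean())  # (240, 320, 2); mean dx roughly 2

    Integrating such a flow field over a cross-section of the plume, together with a concentration estimate, is what a flow-rate number would ultimately come from.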

End of Meeting
