
Revision 2 2011-06-13 - JoelFerst

 
META TOPICPARENT name="TomographGrpMins"

June 11 2011

Tomography Brainstorming Session.

  Ideas
  • I'm wondering about the current state of scanning pouring liquids. We often get a qualitative description of the system, but not a dense quantitative one.
  • Has there been much success in quantitative fluid flow descriptions? Can you do PIV in the interior of fluids?

  • A GeorgiaTech Siggraph submission in 2009 put dye in water, used a video projector pattern with stereoscopic captures, and combined this with predictions from fluid solvers to get the fluid shape. But this is limited because you can only scan what you see.
 
  • There's a lot of numerical prediction in this area, but many of the current methods don't handle estimation robustly.
 
  • Furthermore, there are no solid benchmark cases or quantitative descriptions in use for fluids.
 
  • One of the computer science applications for fluid imaging techniques could be creating ground-truth data sets for fluid simulators. Fluid simulators cannot even specify units, currently; they're considered "physically plausible" or "physically looking", but not actual. A current Siggraph paper by Tyson couldn't answer whether or not they had a real viscosity model. We could try to contribute in this area.
 
  • Verification was a topic of a recent conference, and verifying computer animations of fluids was a big question. Character animation people are starting to show ground truth by placing an animated character walking next to a recording of a real person walking, but this seems not to be the case with fluids. PIV may work, but I doubt anything would show up. There has been work on tomography approaches using fluorescent dyes and pouring fluids; this is used to estimate the thickness of the fluid, but with refraction the problem becomes difficult. With PIV you can observe particles, but there is refraction on the surface of the fluid, so if you can't get a correct model for the surface, you cannot say much about particles inside the fluid.
 
  • Idea: when things are in motion, you get a Doppler effect; we might be able to capture this by observing glowing fluids in motion. If you observe a volume of fluid from different directions, then along each ray you get all the velocities mapped to specific wavelengths, summing up all the velocity vectors in the fluid. Could you use this approach to estimate velocity fields for the fluid? For instance, you could have a glowing fluid where reddish particles move away from the observer and blueish particles move toward it.
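The Doppler idea above can be sketched numerically. This is a hypothetical toy, not a proposed setup: the 500 nm emission line, the parcel velocities, and the brightnesses are all invented, and it uses the classical (non-relativistic) Doppler approximation. The point is that one ray's observation is a brightness-weighted mixture of shifted wavelengths, from which a mean line-of-sight velocity per ray could be read off.

```python
# Sketch: along one ray, each glowing parcel's emission line is Doppler-shifted
# by its line-of-sight velocity; the camera sees the sum of the shifted lines.
# All numbers are hypothetical; classical (non-relativistic) approximation.

C = 3.0e8          # speed of light, m/s
LAMBDA0 = 500e-9   # assumed rest wavelength of the glowing dye's line, m

def shifted_wavelength(v_los):
    """Wavelength seen for a parcel receding at v_los (positive = away)."""
    return LAMBDA0 * (1.0 + v_los / C)

# Three parcels along one projection ray: (line-of-sight velocity m/s, brightness)
parcels = [(-2.0, 1.0), (0.5, 1.0), (3.0, 1.0)]

# The per-ray observation: a brightness-weighted list of shifted wavelengths.
spectrum = [(shifted_wavelength(v), b) for v, b in parcels]

# Tomography-style summary of the ray: total brightness and the
# brightness-weighted mean line-of-sight velocity it encodes.
total_b = sum(b for _, b in parcels)
mean_v = sum(v * b for v, b in parcels) / total_b
```

A velocity-field estimator would then have one such summed measurement per ray and per view direction to invert.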
 
  • For a straightforward application of tomography you need a scalar field; in this case it would only be measured along the projection ray.
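As a concrete illustration of the scalar-field point: each tomographic measurement is a line integral of the scalar field along a projection ray. Below is a minimal sketch on an invented toy 2×2 grid with rays along rows and columns only; note that two projection directions underdetermine the field, so algebraic reconstruction converges to a field consistent with the ray sums, not necessarily the true one.

```python
# Toy tomography of a scalar field: rays are grid rows and columns, and each
# measurement is the sum (line integral) of the field along one ray.
# ART/Kaczmarz update: for each ray, spread its residual evenly over its cells.

def ray_sums(field):
    """Row sums and column sums: two orthogonal sets of projection rays."""
    n = len(field)
    rows = [sum(field[i]) for i in range(n)]
    cols = [sum(field[i][j] for i in range(n)) for j in range(n)]
    return rows, cols

def art_reconstruct(rows, cols, n, sweeps=200):
    """Algebraic reconstruction: repeatedly enforce each ray-sum constraint."""
    x = [[0.0] * n for _ in range(n)]
    for _ in range(sweeps):
        for i in range(n):                      # row rays
            r = (rows[i] - sum(x[i])) / n
            for j in range(n):
                x[i][j] += r
        for j in range(n):                      # column rays
            r = (cols[j] - sum(x[i][j] for i in range(n))) / n
            for i in range(n):
                x[i][j] += r
    return x

truth = [[0.0, 1.0], [2.0, 1.0]]
rows, cols = ray_sums(truth)
recon = art_reconstruct(rows, cols, n=2)
# recon matches all ray sums, but two views cannot pin down the field uniquely.
```

This is exactly why the discussion keeps returning to the number of view directions: more ray directions shrink the ambiguity.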
 
  • What about using lenslets from more than one angle? Yeah, we've done some of that; you can do lens arrays in different arrangements for a camera array. For tomography it's not going to be very good, though. This is the same as the turntable setup Brad uses. What about getting different views in different colours? However, you would need many lasers to build one pixel, so it may not be practical.
 
  • How feasible is it to start playing with different scanning devices, or are we stuck with the cameras and current set-ups we have?
 
  • The nice thing about cameras is that we can quickly try ideas out. If we want to use more precise or specific instruments, we'd better develop our theory first so we can just go and run tests; you can often only sign up for limited time slots on these types of equipment. For instance, the femtosecond laser: it's the physics department's, and everyone's renting it per hour, so it's difficult to get your hands on. The hurdle for getting at this is much greater than just getting cameras. You may have to get into collaborations with people who have this equipment, which is no problem; we could offer to go for a few weeks, get some data, and come back.
 
  • Anyway, there's the intermediate option where we just get new equipment, for instance a cheaper laser. First do the theory to prove that something works and motivate the purchase, and then we can apply for a grant. Or we can run feasibility studies with equipment that somebody else has, and then buy it ourselves if we like the results we are getting.
 
  • Gordon: biological microscope/iPhone setup idea: put a lenslet array on an iPhone, since it has such a high-resolution screen. Use this iPhone/lenslet combination as the background illumination for a microscope, and put the microscopic object on top of it. Something similar has been done before, but not with reconstruction.
 
  • However, nowadays most good microscopes are not back illuminated.
 
  • Well, it could be a cheap hardware choice, and as a bonus we're throwing in the 4D code augmentation allowed by the lenslet array.
 
  • It might be interesting to show that you can get away with a cheap hardware solution, as has been done with other iphone applications, making this technology accessible to people who may not normally be able to afford it.
 
  • In terms of logistics for this application, there may not be a problem with refraction. The bottleneck in this situation wouldn't be pixel resolution; it could be the aperture of the minifying lens.
 
  • I don't think the iPhone will give you a large intensity.
 
  • Well, we have seen that cheap cell phones can replace really expensive equipment in optics. We could use this to replace some of the prohibitively expensive microscopy in medicine. What would be the expensive part, though? The microscope? It could be neat if we showed that a high-school microscope plus iPhone setup outperforms a more expensive one.
 
  • Scientific contribution could be visualizing new unseen aspects of biological systems via Schlieren techniques.
 
  • The issue will be that you have only one viewpoint.
 
  • If you just have one objective lens, you only get a 2D sampling of the 4D ray space.
 
  • The problem is that you have a 4D lenslet array in the background, but only a 2D subset of it makes it to the camera, and you can't vary this subset. You would need to position a light-field camera on top of the setup to fix this problem.
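The 4D-vs-2D point can be made concrete with toy indexing. This is a sketch under assumptions (the grid size, the radiance values, and the direction-selection rule are all invented): a light field L(u, v, s, t) has two positional and two directional axes, and a single fixed camera records only one direction per lenslet, i.e. a 2D slice of the 4D set.

```python
# Sketch: a light field is 4D, L(u, v, s, t) — (u, v) indexes position on the
# lenslet/background screen, (s, t) the outgoing direction. A single fixed
# camera behind one objective lens records only one (s, t) per (u, v), so the
# angular dimensions collapse. Toy layout and values, purely for counting.

N = 4  # toy resolution per axis

# Full 4D light field as a dict: L[(u, v, s, t)] = radiance (made-up values).
L = {(u, v, s, t): float(u + v + s + t)
     for u in range(N) for v in range(N)
     for s in range(N) for t in range(N)}

def camera_slice(L, s_of, t_of):
    """The 2D image a fixed camera records: one direction (s, t) per lenslet,
    chosen by the optics (here a hypothetical mapping from position)."""
    return {(u, v): L[(u, v, s_of(u, v), t_of(u, v))]
            for u in range(N) for v in range(N)}

# A fixed camera picks, say, the central direction everywhere:
img = camera_slice(L, lambda u, v: N // 2, lambda u, v: N // 2)

# 4D samples available vs. 2D samples actually recorded:
ratio = len(L) // len(img)   # the camera misses a factor of N*N in angle
```

A light-field camera on top would let `s_of`/`t_of` vary per sub-aperture view, recovering more of the 4D set.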
 
  • But does it get flakey at that scale? Well, we do have a micron-scale translation stage in the lab we could use to test this out. Its highest resolution is about 4 microns.
 
  • James: another application. I worked at a company in Burnaby that wanted to build a fusion reactor. They used liquid lead in a vessel that evacuates a central column, and shot plasma into the centre. They then create sound waves from the outside that press the lead into the middle and crush the plasma.
 
  • The problem is, they have no idea how to track it or check if it's working.
 
  • Could we use tomography here to take measurements of this process?
 
  • I guess we could use ultrasound? But now we're getting away from wave optics, and we just don't have much experience with that in the lab.
 
  • They were hoping to image it, somehow. Basically any information in the way of an image, or recording of what's happening during this process would be of great use to them.
 
  • The catch is, whatever you put on top to record this is going to get blown off, when they release the pressure near the end of the experiment.
 
  • If you can get one frame at the right time exposed correctly that would be helpful to them.
 
  • I wonder if they could get parallax. The liquid surface is made of lead, so it should be reflective; we could bounce tomography rays off this surface.
 
  • I know astronomy people have liquid lenses made of mercury that spin to get a smooth surface. You can't tilt it, though. These kinds of spinning liquids, if driven the right way, should be very smooth.
 
  • Gallium in a nitrogen atmosphere won't form a skin; otherwise it will.
 
  • What about RF sensors? For sensing distance. What would you be broadcasting and recording? Mostly a problem because of speed and framework. Might not be useful for reconstruction. What happens to light that is passed through plasma? Just absorbed? There's going to be a large magnetic field present, too, which is another consideration.
 
  • Even a single-shot setup could be enough; we could then do multiple setups.
 
  • Another idea: I was wondering if we could reconstruct an ocean-air interaction using tomography. Try to model tidal flow, web, after something like a tsunami. We could extrapolate from a small model setup to oceanography, and make contributions this way.
 
  • Maybe use the 3D printer to print a to-scale replica of a bay and image that. Do something like a one-surface reconstruction. We have to be careful of scale: water density and gravity considerations might not scale up well.
 
  • I have seen a set-up with model islands used to reconstruct tide and web. This was used to simulate events in the ocean, so I think it may be possible.
 
  • Nevertheless, exactly how you could scale it up is unclear. But they do it for ships in test tanks, so you could still get useful information; the fine-scale information may not match the real thing so well.
 
  • What about using satellite imagery for tomography? Can you pick up features like tsunamis quickly enough? Not really. You could analyze what happens after, but the data is noisy. This is not tomography in a sense, because there are no voxels. It would be challenging because you would need simultaneous exposures from different satellites. Resolving waves? We may be able to average over many small waves to reconstruct.
 
  • However, there is a large problem with satellites: the ultimate rolling-shutter artefact problem, since they use a 1D sensor.
 
  • Cloud patterns? Scattering tomography. Anything outdoors is difficult to get necessary baselines.
 
  • The baseline might not be a problem: we could just distribute cameras on the North Shore and take pictures. But clouds are so thick that rays might not pass through them.
 
  • If you can scale it up, it may be helpful to work inside the lab. It would, however, be very application-specific. You'll need to find out EXACTLY what people want to learn from the experiment before specifying a setup.
 
  • We could measure waves from a hull form and compare these to simulated waves to get a 3D reconstruction of the waves (this needs a tow tank).
 
  • Schlieren and water: the problem is that water doesn't compress, which is what Schlieren picks up.
 
  • Tomography of fluids: what about fluid mixtures? We tried water and corn syrup, which results in interesting patterns. Animation people often can't do this very well, and food industries want to measure it. To what degree do you really want to target a specific application? Or do you want to find some general solution that contributes to the way reconstructions are done? That would be a good technical contribution.
 
  • What were the big limitations to mixing? Brad: turbulence in the image, and not being able to image it fast enough. Small particles. It's the same kind of effect Gordon showed with water: the refractive index difference is so large that it changes the focus. If you wanted any robust quantitative measurement, you would have to separate absorption from the system; but in some of the applications we only care that the measurements are good enough. There are commercial cameras that can do up to 30 fps at high resolutions, though I don't know if these industrial cameras have a program suite with the flexibility Canon cameras have.
 
  • Some of the things we did with testing were close enough that you could get useful data from them - they were just out of focus. It was so finicky that we couldn't get it to work with the BOS.
 
  • Gordon: you could take a camera array and a light field, and image the object in front of an arbitrary diffuse object. Whenever you add refraction to that, the light field will be 4D. You could reconstruct the object by modifying constraints, and do different types of reconstructions (not necessarily tomography) with a narrow camera baseline.
 
  • Another application for fluid modeling: running gas pipelines. Gas leaks occur on pipelines, and one needs to find them: how much gas is leaking? You need to do a flow rate estimation and reconstruct the plume. You could use a one-sided camera array that doesn't need a reference background; it scans, and you then try to figure out from neighbouring scans whether there's a plume there.
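For the flow-rate estimation step, here is a minimal sketch of the arithmetic, assuming a tomographic reconstruction has already yielded per-voxel gas concentration and normal velocity on one cross-sectional plane of the plume (the voxel area, concentrations, and velocities are all made-up numbers): the leak rate is the flux of concentration through that plane.

```python
# Sketch: once a reconstruction gives a concentration field on a voxel grid
# plus a velocity estimate, the leak's flow rate is the flux through a
# cross-section: sum of concentration * normal velocity * face area.
# Hypothetical numbers and units throughout.

VOXEL_AREA = 0.01  # assumed cross-sectional area of one voxel face, m^2

def flow_rate(concentration_slice, velocity_slice):
    """Mass flow through one plane: sum of c * v * dA over the slice (kg/s)."""
    return sum(c * v * VOXEL_AREA
               for c, v in zip(concentration_slice, velocity_slice))

# One reconstructed cross-section of the plume, flattened to a list:
conc = [0.0, 0.2, 0.5, 0.2, 0.0]   # kg/m^3 in each voxel
vel  = [0.0, 1.0, 1.5, 1.0, 0.0]   # m/s normal to the plane

q = flow_rate(conc, vel)
```

This is why the reconstruction needs both a density-like field and a velocity estimate; neither alone gives a leak rate.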
 
  • The idea was to have a background with optical flow, but you don't necessarily have a clean background, and optical flow might not be high-resolution enough for this.
 
Changed:
<
<
Idea was to have background with opt flow. But don't necessarily have clean background. Opt flow might not be high res enough for this.
>
>
End of Meeting
 
