
June 8 2011

Fluid Tomography: Bubble Trajectory

Goal:

  • Capture particle trajectories in fluids.

Methods:

  • Array of 8 video cameras.
  • Synchronized exposure.

Model:

  • Model the bubbles as small, homogeneous light sources that emit light uniformly in all directions (sketched below).
  • Assume symmetric (isotropic) light emission.
  • Currently ignore shadowing of one bubble by another.
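A minimal forward-model sketch along these lines, in Python. It assumes a calibrated pinhole camera described by a hypothetical 3x4 projection matrix and an inverse-square brightness falloff for an isotropic emitter; the function name, arguments, and falloff model are illustrative assumptions, not the actual pipeline.

    import numpy as np

    def render_bubbles(P, cam_center, bubbles, img_shape):
        """Project isotropic point emitters into one camera view.

        P          : 3x4 projection matrix (assumed known from calibration)
        cam_center : camera center in world coordinates (3-vector)
        bubbles    : iterable of (position 3-vector, emitted power) pairs
        img_shape  : (height, width) of the synthesized image
        """
        img = np.zeros(img_shape)
        for pos, power in bubbles:
            u, v, w = P @ np.append(pos, 1.0)   # homogeneous image coordinates
            if w <= 0:
                continue                        # bubble is behind the camera
            row, col = int(round(v / w)), int(round(u / w))
            if 0 <= row < img_shape[0] and 0 <= col < img_shape[1]:
                # Uniform (isotropic) emission: brightness falls off with the
                # square of the distance to the camera.  Shadowing of one
                # bubble by another is ignored, matching the current model.
                d2 = np.sum((np.asarray(pos) - np.asarray(cam_center)) ** 2)
                img[row, col] += power / max(d2, 1e-9)
        return img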

Discussion:

  • Question: Why are some bubble streaks dimmer than others?
  • Answers: Some bubbles may be traveling faster or slower, may be different sizes, or may be closer to or farther from the camera.
  • Question: What type of model are you solving for?
  • Answer: We're solving for a light emission model of these bubbles.
  • Suggestion: The lighting seems uneven when looking at the cylinder holding the water and bubbles: it's brighter at the top and dimmer at the bottom. You could correct for this by putting tinfoil at the bottom to even out the lighting.
  • Question: How much of the blur is due to depth of field and how much is due to the emission pattern of the bubbles?
  • Answer: Not exactly sure. We need to make sure we're not changing the focus/calibration settings of the camera between captures.
  • Question: Why are you doing this? What makes this work interesting/useful?
  • Answers: This is a spot in the design space that hasn't been explored (new knowledge).
  • Essentially, we could do PIV (particle image velocimetry) without the need for tracking.
  • Ultimately, the goal is to replace tracking with the production of optical streamlines that you can eventually segment out.
  • There are a lot of particles, but you don't have to track them individually because you have the trace records.
  • You can then take the streamlines, fit basis functions to them, and possibly use them for fluid direction estimation (see the fitting sketch after this list).
  • Furthermore, the streamlines compactly encode an infinite number of bubble positions along their path, so we don't have to store many individual bubble positions.
  • Suggestion: We need to be sure to use fast cameras to avoid gaps in the streamlines, which would turn this back into a tracking problem.
  • Question: Could there be a potential application of this to the Jellyfish tomography study?
  • Answer: Possibly; the bubble positions could be used to interpolate the jellyfish's motion.
  • Note: There hasn't been a multi-camera capture of the bubbles, yet.
  • Note: The setup for capturing the bubble streams is similar to those employed in the glass reconstruction methods.
  • Note: Camera capture setup: this is not an orthographic projection set-up; we have astigmatism here because of the cylindrical lens.
  • Note: We may have to be careful: when looking through glass, you may lose any linearity you had before, due to refraction (Snell's law).
  • Note: Rays of light behave as follows in this situation: parallel rays traveling orthogonal to the cylinder and passing through its thin vertical axis go straight through in parallel, while rays on either side of that axis hit the cylindrical glass and converge (see the refraction sketch after this list).
  • Question: Could you resample this as an orthographic projection?
  • Answer: Probably not in this case: the camera setup is not dense enough.
  • Note: The encoding pattern on the background is robust to vertical but not horizontal shift.
  • Question: Do you think an unstructured reconstruction could be used for the bubble streams?
  • Answer: I don't think you'll benefit from an unstructured reconstruction: the bubbles are mostly uniform, so it's not worth building an adaptive grid. We should just use a regular grid volume.
  • In this case - unlike in the jellyfish project - we can use a single volume reconstruction, with a single, long exposure.
  • Question: Is it possible to estimate the bubble diameter?
  • Answer: Maybe. There are stream-like segments, so we could use brightness intensities to estimate this. Brightness is a non-linear function of bubble diameter and of the bubble's distance from the camera, so it could be difficult.
  • Question: Is the pipe in the set-up going to cause problems?
  • Answer: Yes. For this reason, we may only be able to use a 135-degree camera arc rather than 180 degrees.
  • Question: When you assemble the frames of the video captured, are the streams contiguous, connected, or disconnected?
  • Answer: If you use the maximum exposure period, you can capture 100% of the field, so the streams should be contiguous.
  • Note: A final note for the jellyfish project: make sure you use back or top illumination, to avoid random specularities/noise in the frame.
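As mentioned above, once streamlines are traced, basis functions could be fit to them for compact encoding and flow-direction estimates. A minimal sketch in Python, assuming an ordered set of streak points and a cubic polynomial basis over a chord-length parameter; the function name and the choice of basis are illustrative, not something decided in the meeting.

    import numpy as np

    def fit_streamline(points, degree=3):
        """Fit a polynomial basis to ordered points along one streak.

        points : (N, 3) array of bubble positions along a single streak
        degree : polynomial degree used as the basis (an assumption)

        Returns s(t) giving position along the fitted streamline for t in
        [0, 1], and s'(t) as a local flow-direction estimate.
        """
        points = np.asarray(points, dtype=float)
        # Chord-length parameterization: t proportional to cumulative arc length.
        seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
        t = np.concatenate([[0.0], np.cumsum(seg)])
        t /= t[-1]
        # One polynomial per coordinate; the coefficients compactly encode the
        # whole path, so individual bubble positions need not be stored.
        coeffs = [np.polyfit(t, points[:, k], degree) for k in range(3)]
        pos = lambda s: np.array([np.polyval(c, s) for c in coeffs])
        vel = lambda s: np.array([np.polyval(np.polyder(c), s) for c in coeffs])
        return pos, vel

For example, with pos, vel = fit_streamline(streak_points), normalizing vel(0.5) gives the estimated flow direction at the middle of the streak.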
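For the note above on losing linearity behind the glass, the bending at each interface is just Snell's law. A minimal vector-form sketch in Python; the refractive indices and the single-interface setup are assumptions for illustration, not a model of the full air/glass/water cylinder.

    import numpy as np

    def refract(ray_dir, normal, n1, n2):
        """Refract a unit ray direction at one interface via Snell's law.

        ray_dir : unit direction of the incoming ray
        normal  : unit surface normal pointing back toward the incoming ray
        n1, n2  : refractive indices on the incoming / outgoing sides
                  (e.g. air ~1.0, glass ~1.5, water ~1.33)

        Returns the refracted unit direction, or None on total internal
        reflection.
        """
        r = n1 / n2
        cos_i = -float(np.dot(normal, ray_dir))
        sin_t2 = r * r * (1.0 - cos_i * cos_i)
        if sin_t2 > 1.0:
            return None                      # total internal reflection
        cos_t = np.sqrt(1.0 - sin_t2)
        return r * np.asarray(ray_dir) + (r * cos_i - cos_t) * np.asarray(normal)

A ray hitting the curved wall away from the vertical axis meets the surface obliquely and is bent, which is why off-axis parallel rays converge while rays through the axis pass straight on.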

End of Meeting

-- Main.joelaf - 08 Jun 2011

Topic attachments
mike-fluid-tomography-brainstorm-lowres.pdf (PDF, 5733.1 K, 2011-06-09) - Bubble trajectories, (low-res) presentation