Camera Hardware

Fall 2023


Lab Outline

Goals

  • Explores the DVP protocol used by the OV5640 camera, and walks through the process of decoding what the camera gives you.
  • Builds up all the necessary components to get to successful video output, rather than focusing on what one can do with the video output
  • Leans heavily on the process of figuring out how to decode hardware output by understanding datasheets; guided steps through the datasheet, connecting things there to what's been learned in lecture/past labs
  • Utilizes the HDMI output from a past lab to display video results
  • Utilizes an (assumed) past lab on I2C communication as the basis for SCCB communication
  • Builds up to future labs that get to delve more into analysis of video data

Start by walking students through building modules that interpret DVP from the ground up, through exercises that thoroughly test their modules

Exercise 1: DVP Receiver

  • Build a module that can take a frame of camera data and map it to its appropriate coordinates (hcount and vcount values)
  • Equivalent to the current recover.sv module for the OV7670
  • Draw parallels between HDMI output of previous lab and DVP input of this lab--rather than generating a video signal sequence like in video_sig_gen, we're reconstructing the data being transmitted in this format
  • Assume that we have a module already producing a valid signal + 16-bit pixel data (as if camera.sv has already operated on the data)
  • Test data for module: a variety of signals based on raw camera data (assuming pixel reconstruction is already done) that can be hooked up to a simulated BRAM to display images
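One possible shape for this module, assuming the upstream reconstructor hands us a one-cycle valid pulse alongside each 16-bit pixel. All port names, the frame_start_in strobe, and the parameterization are placeholders for illustration, not the official lab skeleton:

```systemverilog
// Hypothetical sketch: tag a stream of valid camera pixels with (hcount, vcount)
// coordinates, playing the inverse role of video_sig_gen from the HDMI lab.
module dvp_receiver #(
    parameter HRES = 320,
    parameter VRES = 240
) (
    input  wire         clk_in,
    input  wire         rst_in,
    input  wire         frame_start_in, // pulse at the start of each frame
    input  wire         valid_in,       // one-cycle pulse per reconstructed pixel
    input  wire  [15:0] pixel_in,       // 16-bit pixel from the reconstructor
    output logic [15:0] pixel_out,
    output logic [$clog2(HRES)-1:0] hcount_out,
    output logic [$clog2(VRES)-1:0] vcount_out,
    output logic        valid_out
);
    // internal counters track where the *next* valid pixel belongs
    logic [$clog2(HRES)-1:0] hcount;
    logic [$clog2(VRES)-1:0] vcount;

    always_ff @(posedge clk_in) begin
        if (rst_in || frame_start_in) begin
            hcount    <= 0;
            vcount    <= 0;
            valid_out <= 0;
        end else begin
            valid_out <= 0;
            if (valid_in) begin
                // tag the incoming pixel with the current coordinates...
                pixel_out  <= pixel_in;
                hcount_out <= hcount;
                vcount_out <= vcount;
                valid_out  <= 1;
                // ...then advance, wrapping at the end of each row and frame
                if (hcount == HRES-1) begin
                    hcount <= 0;
                    vcount <= (vcount == VRES-1) ? 0 : vcount + 1;
                end else begin
                    hcount <= hcount + 1;
                end
            end
        end
    end
endmodule
```

Resynchronizing on a frame-start strobe (rather than free-running counters) keeps a dropped or spurious pixel from corrupting every subsequent frame.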

Exercise 2: DVP Pixel Reconstructor

  • Build a module that can detect rising edges of the pixel clock, and collect the two adjacent bytes of a given pixel
  • Equivalent to the current camera.sv module for the OV7670
  • Assumes that the signal has already been passed through a couple of registers (synchronizer.sv) to eliminate metastability issues, but no other pre-processing (also explain what the synchronizer is doing!)
  • Build logic that realigns at the start of each row, so the module knows which byte is the high byte and which is the low byte
  • Test data for module: actual raw capture data from an already-configured camera, in different settings generating different images or test patterns.
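A minimal sketch of the byte-pairing step, assuming pclk, href, and the 8-bit data bus have already passed through the synchronizer, and that the system clock is fast enough to oversample pclk. Port and signal names are illustrative, not the actual camera.sv:

```systemverilog
// Hypothetical sketch: detect rising edges of the camera pixel clock and pair
// adjacent bytes into one 16-bit pixel, realigning at the start of each row.
module pixel_reconstruct (
    input  wire       clk_in,      // system clock (faster than pclk)
    input  wire       rst_in,
    input  wire       pclk_in,     // synchronized camera pixel clock
    input  wire       href_in,     // synchronized row-valid signal
    input  wire [7:0] data_in,     // synchronized 8-bit camera data bus
    output logic        valid_out, // one-cycle pulse per complete pixel
    output logic [15:0] pixel_out  // {high byte, low byte}
);
    logic       pclk_prev;
    logic       byte_phase;        // 0: expecting high byte, 1: expecting low byte
    logic [7:0] high_byte;

    always_ff @(posedge clk_in) begin
        if (rst_in) begin
            pclk_prev  <= 0;
            byte_phase <= 0;
            valid_out  <= 0;
        end else begin
            pclk_prev <= pclk_in;
            valid_out <= 0;
            if (!href_in) begin
                byte_phase <= 0;   // realign at the start of every row
            end else if (pclk_in && !pclk_prev) begin
                // rising edge of pclk: capture one byte of the current pixel
                if (byte_phase == 0) begin
                    high_byte  <= data_in;
                    byte_phase <= 1;
                end else begin
                    pixel_out  <= {high_byte, data_in};
                    valid_out  <= 1;
                    byte_phase <= 0;
                end
            end
        end
    end
endmodule
```

The `!href_in` reset of byte_phase is what gives row-start alignment: even if a byte is missed mid-row, the high/low pairing recovers on the next row.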

Exercise 3: SCCB Configuration

  • Build a module that reads configuration data from a ROM and transmits it to the camera via I2C (SCCB)
  • Designed to fit the format produced by our startup register sequence Python script
  • Relies on existing code for I2C in an imaginary lab that would come before this, so it's not building the protocol from the ground up, just a wrapper with a state machine for configuring the camera
  • Requires designing a state machine, but without much worry of dead cycles and efficiency given the slow time scale of I2C
  • Relies on knowledge of ROMs coming from something like popcat in the HDMI lab
  • Test it with a pre-built I2C peripheral to make sure things are being read properly.
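One way the wrapper state machine might look. Since the underlying I2C lab is imaginary at this point, the i2c_ready/i2c_start handshake, the 24-bit ROM entry format, and all names below are assumptions for illustration only (the sketch also assumes the master deasserts i2c_ready within a cycle of accepting a start pulse):

```systemverilog
// Hypothetical wrapper FSM: stream {register, value} pairs from a ROM into an
// assumed I2C master module from the earlier lab, one write per ROM entry.
module sccb_config #(
    parameter NUM_REGS = 256,
    parameter CAM_ADDR = 7'h3C        // assumed OV5640 SCCB device address
) (
    input  wire         clk_in,
    input  wire         rst_in,
    // interface to the assumed I2C master
    input  wire         i2c_ready,    // high when the master can take a new write
    output logic        i2c_start,    // one-cycle pulse to launch a write
    output logic [6:0]  i2c_dev_addr,
    output logic [23:0] i2c_payload,  // {16-bit register address, 8-bit value}
    // interface to the configuration ROM
    output logic [$clog2(NUM_REGS)-1:0] rom_addr,
    input  wire  [23:0] rom_data,     // one {reg, value} entry per ROM line
    output logic        done_out
);
    typedef enum logic [2:0] {IDLE, FETCH, SEND, WAIT, DONE} state_t;
    state_t state;

    assign i2c_dev_addr = CAM_ADDR;

    always_ff @(posedge clk_in) begin
        if (rst_in) begin
            state     <= IDLE;
            rom_addr  <= 0;
            i2c_start <= 0;
            done_out  <= 0;
        end else begin
            i2c_start <= 0;
            case (state)
                IDLE:  state <= FETCH;
                FETCH: state <= SEND;           // one cycle for the ROM read
                SEND:  if (i2c_ready) begin
                    i2c_payload <= rom_data;
                    i2c_start   <= 1;
                    state       <= WAIT;
                end
                WAIT:  if (i2c_ready && !i2c_start) begin
                    // current write finished; move on to the next ROM entry
                    if (rom_addr == NUM_REGS-1) state <= DONE;
                    else begin
                        rom_addr <= rom_addr + 1;
                        state    <= FETCH;
                    end
                end
                DONE:  done_out <= 1;
            endcase
        end
    end
endmodule
```

As the bullet above notes, the slow I2C timescale means the extra FETCH/WAIT cycles cost nothing; clarity of the state machine matters more than efficiency here.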

Checkoff 1: Hardware Startup

  • Piece the existing modules together in order to actually get the camera running (not yet incorporating the display of the output)
  • Walk through providing an external clock via one of the pins, and giving it a strong drive in the xdc file
  • top_level provided that connects the output of the above exercises to a seven-segment display showing the highest hcount and vcount values reached--if the displayed values match what's expected from the setup (e.g. 320 hcount and 240 vcount if set to 240p), then the camera configuration was successful!
  • Display the rgb value from a specific pixel location on the board's RGB LED, and demonstrate that it changes appropriately when you cover the camera or hold a solid color in front of it
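The external-clock constraint mentioned above might look something like the following. The pin site, port name, and specific DRIVE/SLEW values are placeholders for whatever board is in use; they just illustrate what "strong drive" means in XDC terms:

```tcl
# Hypothetical XDC fragment: drive the camera's external clock from an FPGA pin.
set_property PACKAGE_PIN A14      [get_ports cam_xclk]  ;# placeholder site
set_property IOSTANDARD  LVCMOS33 [get_ports cam_xclk]
set_property DRIVE       12       [get_ports cam_xclk]  ;# stronger output drive
set_property SLEW        FAST     [get_ports cam_xclk]  ;# sharper clock edges
```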

Checkoff 2: Video Display

  • Work with a new top_level module that builds in pieces from the HDMI lab to get a basic video display going
  • Build a scale module to interact between a frame buffer BRAM port and HDMI output
  • No complicated mux for video display, just a choice between a black background and the video frame buffer; save the mux for when more complex video processing is being done
  • Students demonstrate the video playing in real time, at different scale values.
  • (Very intentionally doing basically no analysis of the actual camera data--that's saved for a following week's lab, since this lab had such a heavy focus on just getting it working and understanding all the things that go into that)
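The scale module could be as small as an address translation between HDMI coordinates and the frame buffer, if scaling is restricted to powers of two. A sketch under that assumption, with hypothetical names and widths sized for a 320x240 buffer:

```systemverilog
// Hypothetical sketch: translate HDMI pipeline coordinates into a frame-buffer
// read address, upscaling the stored image by a power-of-two factor.
module scale #(
    parameter FB_WIDTH  = 320,
    parameter FB_HEIGHT = 240
) (
    input  wire [10:0]  hcount_in,  // HDMI pipeline coordinates
    input  wire [9:0]   vcount_in,
    input  wire [1:0]   scale_in,   // shift amount: 0 -> 1x, 1 -> 2x, 2 -> 4x
    output logic [16:0] addr_out,   // frame buffer BRAM read address
    output logic        in_frame    // high when inside the scaled image region
);
    logic [10:0] fb_x;
    logic [9:0]  fb_y;

    always_comb begin
        // shifting right by the scale factor maps screen pixels back onto
        // frame-buffer pixels (each stored pixel covers a 2^scale square)
        fb_x     = hcount_in >> scale_in;
        fb_y     = vcount_in >> scale_in;
        in_frame = (fb_x < FB_WIDTH) && (fb_y < FB_HEIGHT);
        addr_out = fb_y * FB_WIDTH + fb_x;
    end
endmodule
```

Outside the in_frame region, the top level would mux in the black background; since BRAM reads are registered, the top level also needs to pipeline in_frame alongside the read so the mux decision lines up with the returned pixel.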

Things to Try

  • Have file upload for exercises, rather than a text box? Get people to not just write in the online submission box, like I did early on in this class...
  • Explore the best ways for testing code in the early exercises