This three-lesson "legacy cycle" unit is structured around a contextually based Grand Challenge followed by a sequence of instruction in which students first offer initial predictions (Generate Ideas) and then gather information from multiple sources (Multiple Perspectives). Next comes the Research and Revise phase, in which students integrate and extend their knowledge through a variety of learning activities. The cycle concludes with formative (Test Your Mettle) and summative (Go Public) assessments that lead students toward answering the challenge question. The research and concepts behind this way of learning may be found in How People Learn (Bransford, Brown & Cocking, National Academy Press, 2000); see the entire text at
The legacy cycle is similar to the engineering design process in that both involve identifying an existing societal need, applying science and math concepts and knowledge to develop solutions, and using research conclusions to design a well-conceived solution to the original challenge. Both the engineering design process and the legacy cycle aim to generate accurate, workable solutions, although the approaches vary somewhat in how a solution is devised and presented. See an overview of the engineering design process at
In Lesson 1, The Grand Challenge: Simulating Human Vision, students are prompted to brainstorm answers to the following Grand Challenge: "The Wall-e robotics firm thinks it has a unique and novel solution for getting a broader spectrum of usable data. Instead of using a single camera, they have mounted two cameras at different focal lengths on top of the robot. The first camera provides an up-close and detailed image (however, this image lacks surrounding data) and the second camera provides a broader view with less detail. However, they need this data to be usable by humans. Right now a human must look at two separate pictures. Could you somehow combine those images to simulate how a human's vision would focus in and out of the two focal lengths? How would you accomplish this task?" Then, students enter the Research and Revise step, focusing on how human vision differs from that of a camera. Students complete the Peripheral Vision Lab activity, which helps them understand the limitations that camera lenses place on a robot's peripheral vision. Students also see how the focal length of a camera lens affects a robot's field of view.
In Lesson 2, What Makes up Color, and its associated activity, RGB to Hex Conversions, students return to the Research and Revise step for further learning. The lesson and activity provide teacher instruction and example problems on computer image composition and RGB conversions. Students acquire the skills necessary to make the required calculations, including ample practice with these calculations in the activity.
In Lesson 3, How Do You Store All This Data?, the Research and Revise step comes to a close as students are instructed on how two-dimensional arrays work and how the vector class allows programmers to use the same concept but with a dynamic container.
In the final activity, Putting It All Together, students are given six days in the computer lab to write code that answers the Grand Challenge question posed in Lesson 1, with the teacher instructing and guiding them through the process. Students must use their knowledge of how human eyes see to determine which digital images model human vision at different locations in the viewing area. They then apply their knowledge of data storage and of combining or averaging pixels to store the data from those images appropriately before combining them into a final simulation.
In sum, this unit connects computer science to engineering by incorporating several science topics (eye anatomy, the physics of light and color, mathematics, and computing) and guides students through the design process to create final simulations.