Computational Imaging

by Design

Results of a workshop in Tucson, Arizona, November 2022

Contents

  • Edmunds Scientific Computational Imaging Roundtable
  • Executive Summary
  • Computational Imaging
  • Biological Computational Imaging
  • Computational Imaging directions
  • The Tucson Event
  • Edmunds personnel-inspired outcomes
  • Participant-inspired outcomes
  • Applications
  • Conclusions
  • Future Roundtables

 

Ted Selker, Selker Design Research

Executive Summary

The Edmund Scientific Computational Imaging Roundtable brought together a group of legendary optical engineers, camera builders, camera integrators, professors, and futurists to discuss the state of Computational Imaging and how it now affects, and might soon affect, the optics and camera industries. 

We were asked to try our hand at editing an inclusive definition of Computational Imaging; I wrote the following:

Computational Imaging defines scenarios that marry optical sensing and computation to understand spatial-temporal situations while reducing effort in design and manufacturing and conserving energy. These properties create increased capability for solutions such as recording and acting on spatial-temporal experiences with non-contacting sensors.  Computational Imaging is often used in conjunction with other sensory, physical, and effector systems in support of human needs.

We discussed exotic and up-and-coming cameras, such as cameras with onboard processing, sensors and cameras that encode data in specialized ways, cameras that use Computational Imaging to take advantage of novel optics, cameras in different spectra, and the importance of integrating other sensors to add value.  Beyond besting lidar and standard cameras for navigation applications, the plummeting cost of cameras with Computational Imaging is driving amazing new capabilities and pervasively available applications.

We talked about analog and digital conditioning and computation on the camera module.  These circuits form a camera stack that can reduce noise and simplify communication while adding new functions.  Establishing provenance with image watermarks, for example, is a different direction than interpreting imagery, and it will be central wherever camera outputs are relied on as documentation.

A call was made for establishing camera design toolkits that allow student and professional camera designers to create complete physics-to-AI camera-sensor stacks for sensory-integration solutions. A dream was presented that high-end camera customers should collaborate, instead of going it alone, in defining better cameras that can be made more quickly and cheaply.

Discussion of X-ray CT scans, MRI (magnetic resonance imaging), ultrasound, and other energy-field imaging made clear that Computational Imaging is broader than optical sensing.  Sensory integration will be as central to robots as it is for animals.

The huge variety of biological eyes and their use as part of sensory fusion inspired this report as well.

Especially exciting are all the new places imaging will be used, with new ways of guiding the imaging and new simplifications of cameras such as onboard signal processing, calibration, and control.

Everyone felt deeply honored to be part of the roundtable.  I imagine productive events like this could be seminal.  I sketch a list of possible roundtable topics under Future Roundtables below.

Computational Imaging

We are far beyond a family sharing a camera as their prized possession. The industry is currently producing more than one camera per person in the world per year. What does it mean to drive meaningful use of billions of cameras? Is the boutique design market the right place for a quality design house, or should it position its tools to best take part in the largest markets?

For more than 200 years a camera has been a magical thing that reproduced and made a projection of what it viewed in the world.  We are now deeply engaged in transforming what we use cameras and other sensors for.  Starting with spatial light meters, autofocusing, and more recently face finding, more and more cameras interpret what their sensors record. They are starting to be specialized, though not yet to the degree of animal eye specialization.  From better images to surveillance, they are tuned to find anomalies.  No longer are they simply descriptive, but increasingly discriminative in service of goal-oriented systems. 

Computational Imaging has been with us for some time. In 1976 Weyerhaeuser used a CCD illuminated by structured light to analyze boards in a factory[1]. It increased usable lumber by 15%. Even then image-based focusing was being demonstrated[2].  Yet 45 years later, it still feels like the beginning of the impact of Computational Imaging.  The impacts are, and will be, immense.  Tools like Code V[3] and COMSOL[4] give optical designers huge improvements in their design process, speed, and quality.  Automated machine centers like Moore Nanotech[5] provide simplified paths to precision tests of optical ideas.  Software systems like OpenCV[6] allow us to easily use Computational Imaging in unlimited ways.  Such tools form some of the infrastructure for modern camera design.

Imaging systems today produce solutions more than photos. While novel cameras sometimes use optics as well as sensors in new ways, the group agreed that physical lenses are here to stay.  Computational Imaging will play a central role in every use of optics, reducing and interpreting images for compression and analysis. Technical conferences such as ICLR and CVPR include papers on Computational Imaging topics, ranging from depth from NeRFs, to focus, to sensor fusion, to human detection, to intent interactions, to beam shaping, to diffractive elements, to meta-optics, to coded apertures, to computational projection, and more.

Fancy techniques get their value from the applications they enable. Biological eyes show a myriad of techniques that work in different applications.

Biological Computational Imaging

Biology has produced a myriad of imaging solutions.  Different eyes give different world views and affordances. The ocular and computational interpretations of various animal eyes are central to what each animal can do in its environment.

I include pictures of a small sample of optical configurations of complex eyes. The eyes below all include one optical element moving on a single axis.

Other animal eyes are even more exotic, with many optical elements for their various sensors; they may move on a stalk or even sit deep inside the creature.  The eye might have other sensors in it; for example, recent research points to the eye as part of magnetic sensing in many animals.

A great sensor can reduce the computation needed to solve a visual problem. Animals’ eyes are tuned to solve the visual problems in the ecological niches they occupy.  Their solutions trade off motion, resolution, spectral discrimination, local and central computation, and sensory integration. Animal eyes use specialized lenses, receptors, mirrors, motion, spacing, and shape to help them interpret the world. Animals’ mechanisms for interpreting the world around them rely on optical tricks, but many also use computation to solve their imaging problems.  Like some modern cameras, animals place their optical sensor close to the processor for data throughput and reduced latency, or they delegate computation and control to peripheral eyes. The variety of specialization in computational systems in animal eyes is awe-inspiring[7].

Many animal imaging problems are solved with non-computational solutions. The giant squid has a 27 cm diameter eye for low-light collection hundreds of meters below the surface of the ocean. Deep-sea eyes are large to accept more light, but are blinded by too much light.   Many animal eyes use a “tapetum lucidum” mirror to improve their light collection, but lose resolution to the blur of back-reflected light. Chameleon eyes have great 360-degree vision and color detection with complex optics and mobile eyes, but they depend on accommodation, not stereopsis, for depth detection.   The mantis shrimp interprets color directly with up to 15 color receptors, eliminating the need for color processing, but this limits it to only the 15 colors it can recognize[8].

Still more animal imaging problems are solved with Computational Imaging.  The cuttlefish senses color with none of the color receptors of the mantis shrimp. Relying on an internal model of a setting, it scans its environment to integrate and compare views, using chromatic aberration, motion, blur, and focus to distinguish color[9]. It computes stereopsis temporally as it moves its eye laterally over its split pupil[10]. Starfish have up to 40 eyes: one peripheral sensor on each tentacle, with 50-200 “ommatidia,” each with several light sensors. Individually and collectively, they guide the starfish. Centralizing ocular computation for efficient data transfer is more common. Many eyes, including ours, even have several layers of contrast-enhancement circuitry floating in the eye.  Frog eyes go further, including computation that actually produces a fly signal when a fly appears[11].

This discussion has shown a small sample of the diverse sensor and lens configurations and Computational Imaging that animals have developed to work in their various environments.   Even focusing involves simple optical computation; the regular ciliary ganglion structure suggests a simple nearest-neighbor comparison that, when simulated, focuses a lens[12].
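Since the biological focusing trick above is essentially a local sharpness comparison, a minimal contrast-based autofocus sketch may make it concrete. It uses OpenCV's Laplacian as a sharpness score and searches candidate lens positions; the capture_at callback is a hypothetical camera interface, not part of any system cited here.

```python
import cv2

def sharpness(gray):
    # Variance of the Laplacian: larger when edges are crisp.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def autofocus(capture_at, positions):
    """Pick the lens position whose frame scores sharpest.

    capture_at(p) is assumed to return a grayscale frame taken with the
    lens at position p (hypothetical camera/actuator interface).
    """
    scores = [(sharpness(capture_at(p)), p) for p in positions]
    return max(scores)[1]
```

Real systems refine this with hill climbing and phase information, but the core comparison can be this simple.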

Still, for all but the simplest of animals, activity requires computation of some sort for sensory integration.  The sensory integration often includes chemical, smell, taste, somatic, auditory, ocular, electrical, magnetic, and temperature information.  

Animals’ sensory variations seem to be driven by tradeoff-based scenarios of action[13].  Still, when sensory input is equivocal, animal decisions are typically dominated by their eyes. Will the eyes we make for machines simplify to specific designs or explode into the gigantic variety that animal eyes have come to?

Computational Imaging is getting exciting

Integrated human-camera systems have even demonstrated that human abilities, such as facial gestures, can improve computer input[14].  Eye-gaze-based systems have been created to collaborate with hand motion to improve dexterous activity[15].  Systems like the Sony EyeToy and Microsoft Kinect are early examples of using Computational Imaging to recognize user gestures in user interfaces.  Techniques around such solutions have become emblematic of the Mixed Reality headsets that vendors such as Magic Leap, Microsoft, and Facebook are creating.

Industrial optical computation is now able to touch every potato chip we eat. Potato chip factories routinely move the chips by conveyor through an optical station in which every potato chip that is defective is pushed to the side for a second camera-based, automated inspection. The quantity of inspected objects at speed is apparently limitless.  In some rice packaging plants, every rice grain coming from a barge is passed under a camera and deflected to the dark, light, or reject rice bins at speed.

Today’s pervasive sensors notice what to do. Door mats started opening doors in the 1950s; today a more reliable camera-based door opener can replace a wired mat. The camera-based door opener should be able to tell whether the person near the door is just standing there and doesn’t need it opened, whether they are authorized to open it, and even whether they are being stalked, alerting security.   Computational Imaging allows the camera to calibrate for different lighting conditions and also not to open the door when the store is closed.  More importantly, it can discriminate between a shopping cart left near the door, a raccoon, a person trying to enter the store, or friends talking near the door.  AI models of objects and activity give it these capabilities, as well as recognizing unusual activity such as pickpocketing or physical altercations.  Computational Imaging lets one camera serve many semantically discriminating purposes.
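As a rough illustration of one camera serving several semantically different purposes, the sketch below gates a door opener with OpenCV's stock HOG person detector: it opens only during store hours and only on a confident person detection, so a stray cart or raccoon is ignored. The should_open policy and its threshold are illustrative assumptions, not a production design.

```python
import cv2
import numpy as np

# OpenCV's bundled pedestrian detector (HOG features + linear SVM).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def should_open(frame_bgr, store_is_open, min_weight=0.6):
    """Return True only when the store is open and a person is detected."""
    if not store_is_open:
        return False
    boxes, weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8))
    # Carts, raccoons, and shadows rarely yield a confident person score.
    return bool(np.any(np.ravel(weights) > min_weight))
```

The same frame could feed other models (loitering, altercations, security alerts) without adding a second camera.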

How many car fenders have we destroyed as chains came off in the winter? A wheel camera with Computational Imaging positioned to view the chains could also look at the footprint flex of the tire to tell whether it is inflated correctly. It might also spot a foreign object before it penetrates the tire. Why wouldn’t we position it to see brakes overheating as well? A friend has had four blowouts due to tire overheating at the same corner while descending a big hill on a bike.  The picture to the right is a mountain-bike brake that failed 5 miles from our roundtable on Mt. Lemmon.  Cameras and computers are now cheap enough that a wheel camera for a car or bike might cost no more than today’s low-power Bluetooth valve-stem sensor.  This example shows how one camera’s field of view can cover many things, with Computational Imaging models focusing on each individually to alert a user.

The Tucson Event

The roundtable included introductory team-building brainstorming exercises around a neutral area: a business model for silly cows.  We went on to a joint exercise of evaluating a proposed definition of Computational Imaging (see the Executive Summary).  A next ideation exercise enumerated and labeled ideas on yellow sticky notes that pertained to Computational Imaging. This helped immerse us in the topics and learn about each other’s perspectives.  

The yellow sticky ideation clustered contributions for business/application areas such as healthcare, automation of life, security/privacy, industrial automation, defense, observation (including human and remote imaging), and techniques such as sensors and optical design.

The stickies tried to capture our notions of available modern techniques: extended depth-of-field sensing, light-phase measurement, time- and phase-based techniques for moving beyond Nyquist resolution limits, negative-index metamaterials, non-ray-based viewing options, motion-activated event-sensing cameras, and more.

Processing steps and image pipelines were a focus. We are used to adding contrast enhancement to imagers.  We are used to considering motion as both a friend and an enemy, in motion blur and in calculating distance.  We are used to modelling images to know what is changing. Machine learning (ML) is adding example-based, rather than model-based, design strategies for understanding image-based sensors.
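A minimal sketch of "modelling images to know what is changing," assuming OpenCV's mixture-of-Gaussians background model is an acceptable stand-in for whatever model a given pipeline would actually use:

```python
import cv2

# Learns a per-pixel background model and flags pixels that deviate from it.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

def changed_fraction(frame):
    """Fraction of pixels the model considers foreground (i.e. changed)."""
    mask = subtractor.apply(frame)
    return float((mask > 0).mean())
```

Feeding successive frames through and watching changed_fraction is enough to trigger heavier, example-based (ML) analysis only when something actually moves.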

A two-hour exercise of putting yellow stickies on a wall is only a start toward a comprehensive list of application areas.  The exercise was also a roundtable launch that got the group started on even bigger explorations.

Breaks and meals allowed people to speak about their dreams and ideas more intimately. These conversations were especially effective at seeding the larger conversation.  This showed up in the animated second morning session.  We got better at calling on the experts with deep experience and perspective on specific topics.  People crosspollinated the topics and deepened the points that others hadn’t considered.  For example, one person questioned the use of feedback in image computation.  An expert articulately described several impressive and valuable examples of driving adaptive imaging.  Typical modern cameras optimize their images with serious computation.

Cameras will be used in every human endeavor.  Tools are things that allow us to do things we couldn’t do without them or can do better with them. Cameras are becoming game changers everywhere.  From monitoring sewage to suggesting more appropriate outfits for a night on the town, cameras are changing the way everything is done.  We use cameras to watch plants grow, animals move, mechanisms work, and natural and human-made structures change as they are created and also as they deteriorate. We sense inside everything we know of, starting with atomic-level imaging, and in animals, pipes, holes, underwater, and in space to improve our understanding of everything.  I especially enjoyed the discussions on the many things that can be done with computation directly on or behind a camera sensor described below.  We also talked about the coming of age of event cameras.

Edmund’s personnel-inspired outcomes

My conversations with Nitian Sampat seeded the above topic of tools, which carried through the roundtable.

We considered the value of integrated computational libraries, analytics, and graphical visualization approaches to improve the pathways for using modern optical and computational-imaging systems. The conversation must, for example, balance the assumptions built into tools like Code V for developing optical systems against the need to modify those assumptions as the optical system goals change.  Today’s optical design problems include such exotic approaches as multiple layers of diffractive optics for different focal lengths and colors, tiny optical systems in glasses, and so on.  How do we give designers the support to work agilely and flexibly on design goals that weren’t anticipated by the builders of the tools they use? How do we coordinate these optical designs with the computational interactions that can improve the camera?

For decades, integrating microprocessors into products seemed daunting. With Microchip PIC, Lego Mindstorms, or Arduino tools, for example, a child can now create a complete sensor-compute-effector system in a session. Could there be an analog for making camera design more accessible?  Using optical sensing in product design has always seemed super-specialized and even more daunting. What are the sensor/effector design tradeoffs? What spectra, sensors, and computation are appropriate? A simple example is instructive: a design charrette for the postal service made a crash-avoiding system for mail delivery workers.  The alternatives weren’t easy to evaluate; prototypes were built with motion sensors, then optical sensors, and finally radar was used. With the right development environment, a camera could have been designed, simulated, and procured that would have obviated much of the iteration and resulted in a less expensive and more widely useful system.

I grew up knowing of Edmunds as a company that introduced children to the magic of physics. Unlike at other educational materials companies, the catalog business is only the tip of its deep capabilities.  Edmunds helps optical engineering grownups put magic into their optical products too.  Is the Edmunds legacy education and catalog business unique branding and marketing that carries over into students’ careers and their choice of an imaging solutions vendor?

The conversation about whether to serve companies as a productive boutique design house or to do only expensive development projects for large-volume customers raises the question of how to be sure-footed in a complex and rapidly moving high-tech field.  What are the best ways to get value from special design and testing capabilities?

My conversations with Greg Hollows deepened my understanding of how obvious it is to Edmunds that the scale of cellphone engineering has changed the technology, markets, and market value of image solutions.

Conversations with Nathan Carlie included discussion of where need and opportunity lie in the field as it hugely expands its value to every industry and product.

But most of the conversation was between the other fascinating people at the roundtable.

Participant-inspired outcomes

Participants made statements that changed how we see the future of Computational Imaging as it impacts camera design.

David Stork, Nitian Sampat, I, and others brought up various new ways we might be able to describe, design, and utilize a language of optical properties and optical-transfer functions.  How might Zernike polynomials help us consider new ways to characterize lenses? Will the characterization of dimensions for describing a lens inspire us in creating new ways of integrating Computational Imaging into camera design? Do we follow animal models, our dreams, or customer requests in deciding how to integrate complex approaches to match lens design to Computational Imaging?
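As one small, concrete example of such a language, the sketch below (plain NumPy, standard pupil-function/Fraunhofer approximation) builds a wavefront from a few low-order Zernike terms and turns it into a point-spread function. The normalizations follow common conventions and the coefficient values are arbitrary; it is meant only to show how a handful of numbers can characterize a lens's blur.

```python
import numpy as np

def pupil_grid(n=256):
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    return rho, theta, (rho <= 1.0)

def wavefront(coeffs, rho, theta):
    """Sum a few common low-order Zernike terms; coefficients are in waves."""
    terms = {
        "defocus":     np.sqrt(3) * (2 * rho**2 - 1),
        "astigmatism": np.sqrt(6) * rho**2 * np.cos(2 * theta),
        "coma":        np.sqrt(8) * (3 * rho**3 - 2 * rho) * np.cos(theta),
    }
    return sum(coeffs.get(name, 0.0) * z for name, z in terms.items())

def psf(coeffs, n=256):
    """Point-spread function of an aberrated circular pupil."""
    rho, theta, mask = pupil_grid(n)
    pupil = mask * np.exp(2j * np.pi * wavefront(coeffs, rho, theta))
    return np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2

# Example: a quarter wave of defocus plus a little coma.
blur_kernel = psf({"defocus": 0.25, "coma": 0.05})
```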

Cameras already do much to enhance contrast and resolution in the lighting, lens, sensor, integration of successive images, image-based color balancing, model-based interpretation, and more.  As OpenCV and other tools have evolved, they have given camera users a library of computational tools to identify specific objects, track moving objects, recognize posture, predict collisions, and so on.  Can our design tools help us mimic the way animals have optimized optical systems to reflect their umwelt, or world view, of perception and action requirements?

Boyd Fowler described integrated circuitry, and also curving a sensor to reduce lens and computation complexity. Integrated camera design has been, and should be, pushed as a fabulous tool for improving cameras and reducing computation.  An example of simplifying a camera design for image sensing is the Viking 1 Mars lander camera of the early 1970s.  This camera had sensors for various spectra and scanned with two actuators: one tipped the mirror to sweep a sky-to-ground stripe, and the other rotated the mirror. In this way a 360-degree image formed as a sequence of vertical stripes rising from the ground to the sky. The speed of the spinning was synchronized to the speed of streaming the image data back to Earth. The lander included a calibration test target that was also in view as the camera spun around, allowing calibration of the system.  Upon analysis of the test-target imagery, initial press releases of Martian soil color and spectrum had to be revised in later reports.  Integrated camera design can have immense advantages for data collection, calibration, data capture, sensory integration, data transfer, and interpretation.
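A toy version of that stripe-by-stripe design, assuming each mirror position yields one image column and that the on-board test target supplies simple per-channel gains (the real Viking pipeline was of course far more involved), might look like this:

```python
import numpy as np

def assemble_panorama(stripes):
    """Stack vertical stripes, captured as the mirror rotates, into one image.

    Each stripe is assumed to be an array of shape (height, 1, channels),
    in capture order around the full rotation.
    """
    return np.concatenate(stripes, axis=1)

def calibrate_with_target(image, target_measured, target_known):
    """Scale each channel so the imaged test target matches its known
    reflectances -- a bare-bones diagonal color correction."""
    gain = np.asarray(target_known, dtype=float) / np.asarray(target_measured, dtype=float)
    return np.clip(image * gain, 0.0, 1.0)  # assumes image values in [0, 1]
```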

Camera design should support Computational Imaging.  Leo Baldwin and I brought up the fascinating advantages that can be accrued with event cameras.  The value of their way of creating image data comes from producing signals only when there are visual changes on the sensor.  From a robot to a vehicle, they give a gigantic advantage in reducing computation and increasing resolution for anything that moves.  The Seattle Laboratory of Robotics, for example, creates tools for using them to greatly improve spatial models for robots[16].  Event cameras can have better motion resolution and response time and use less power, as computation occurs only for the time or the part of the image that is moving.
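To show why event data is so economical, here is a sketch that converts ordinary frames into sparse events by thresholding log-intensity changes, which is the basic contract of an event sensor; the threshold value and the frame-based input are simplifying assumptions (a real event camera works asynchronously at the pixel level).

```python
import numpy as np

def frames_to_events(frames, threshold=0.15):
    """Emit (t, x, y, polarity) tuples only where log intensity changes enough.

    Static background produces nothing, so downstream computation scales
    with motion rather than with frame rate and resolution.
    """
    events = []
    ref = np.log1p(frames[0].astype(float))
    for t, frame in enumerate(frames[1:], start=1):
        cur = np.log1p(frame.astype(float))
        diff = cur - ref
        ys, xs = np.where(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            events.append((t, int(x), int(y), 1 if diff[y, x] > 0 else -1))
            ref[y, x] = cur[y, x]  # reset the reference only where an event fired
    return events
```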

Feedback is driving cameras. David Brady described how CT scans and MRIs use model- and sensor-based imaging to adapt the scan as it images, optimizing the scanning process.

Cameras best our eyes for resolution, dark sensitivity, and sensing with motion. Boyd Fowler described many near-term game-changing technologies riding on the improvements in fabrication technology.  He described achieving a one-electron noise floor, routinely using 20-bit A/D converters in integrated cameras, creating gigapixel cameras that might fit in a phone, creating curved sensors, layers of computation built right into the camera, and more. 
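A back-of-the-envelope simulation puts those numbers in perspective: with roughly one electron of read noise and a 20-bit converter spanning a typical full well, quantization contributes almost nothing compared with photon shot noise. The full-well capacity and noise model below are illustrative assumptions, not vendor specifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_pixel(mean_photons, read_noise_e=1.0, full_well_e=100_000, adc_bits=20):
    """Model one exposure: shot noise + read noise, then ADC quantization.

    Returns the digitized signal expressed back in electrons.
    """
    electrons = rng.poisson(mean_photons) + rng.normal(0.0, read_noise_e)
    electrons = np.clip(electrons, 0, full_well_e)
    lsb = full_well_e / (2 ** adc_bits - 1)   # ~0.095 e- per code at 20 bits
    return np.round(electrons / lsb) * lsb

# At 100 photons the shot noise (~10 e-) dwarfs both read noise and the ADC step.
sample = simulate_pixel(100)
```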

Kate Medicus described being deeply engaged in a one-of-a-kind, multiyear, expensive, and complex satellite camera. Planning, getting, and testing the appropriate components for such a system is difficult and slow, and typically requires a new and long design or manufacturing pipeline.  Boyd described how such projects shouldn’t go it alone.  He said such projects are hard for the best camera makers to service, reflected in the multimillion-dollar expense and long lead time of creating a special chip for one project.  Stakeholders that have similar needs, for satellite cameras for example, might work together to define a sensor they can all use.  Choice of Computational Imaging features will require collaboration across organizations.

The interplay between what can be done and what customers value will continue.  The issue will be the design-to-build timeline.  Today phones use FPGAs in place of purpose-built processors where they can improve product development time.

Will there be hundreds or thousands of different modules that can be part of a design?  An example of a simple module is proof of authenticity for an image. Image provenance has become critical and problematic.  As an example of the computational libraries that can be put in the silicon on a sensor, Boyd described an 80,000-gate module that makes trackable watermarks in camera sensor data. The important assurance that a picture is authentic will be included in cameras when OmniVision customers ask for it.
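For intuition only, here is a naive least-significant-bit watermark over 16-bit raw pixels. It shows where a provenance payload could live in sensor data, but it says nothing about how the 80,000-gate module mentioned above actually works; a real in-silicon watermark has to survive demosaicing, compression, and deliberate tampering.

```python
import numpy as np

def embed_watermark(raw_u16, payload_bits):
    """Write payload bits into the least-significant bits of the first pixels.

    raw_u16 is assumed to be a uint16 sensor frame; payload_bits is a list of
    0/1 values (e.g. a hash of the frame plus a device identifier).
    """
    flat = raw_u16.reshape(-1).copy()
    for i, bit in enumerate(payload_bits):
        flat[i] = (flat[i] & 0xFFFE) | bit
    return flat.reshape(raw_u16.shape)

def read_watermark(raw_u16, n_bits):
    """Recover the first n_bits of the embedded payload."""
    return [int(v & 1) for v in raw_u16.reshape(-1)[:n_bits]]
```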

The ideation exercise got us discussing technologies, applications, and business areas. The second morning we talked more about new manufacturing opportunities.  Boyd Fowler described how modern camera design simplifies cameras’ application and utility. We are making low-cost, tiny, excellent cameras.  They integrate lenses and computation.  In simple cases, the lenses can be mechanically affixed before the cameras are diced from a silicon wafer. Multielement lenses use Computational Imaging during assembly for custom calibration.  Already many sensors are affixed to an analog layer that processes the pixels before a digital layer converts the signal to four wires for power and communication.  Some cameras are starting to be built with a third layer of onboard computer processing.  Their integrated high-speed processing capability suggests that, with a little I/O capability, on-board camera processors could be the computers that run most of the world’s Internet of Things. With billions of cameras being produced a year, the potential shift toward using cameras’ huge parallel-sensing capability is immense.  A toaster or oven should know when something is browning, a microwave when something is boiling over, a vacuum cleaner when it is about to run over something too big or metallic. The list of small controllers that could incorporate cameras is huge.
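As a hint of how little code some of those appliance behaviors need once a camera and a processing layer share a package, a toy "browning" cue could be nothing more than a darkening measure against a reference frame; the function and the 25% threshold below are purely illustrative.

```python
import numpy as np

def browning_fraction(frame_rgb, reference_rgb):
    """How much darker the toast region is now than when heating started."""
    now = frame_rgb.astype(float).mean()
    ref = reference_rgb.astype(float).mean()
    return max(0.0, (ref - now) / ref)

def toast_done(frame_rgb, reference_rgb, threshold=0.25):
    # Pop the toaster when the scene has darkened by 25% (illustrative value).
    return browning_fraction(frame_rgb, reference_rgb) >= threshold
```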

Applications

Applications in our exploration will drive technology. Beyond the examples above, I include a list that attempts to show a range of opportunities:

  •         How will camera solutions change as they work to help with privacy and security?
  •         Will cameras obviate the need to remember where we left anything, or even what we did? 
  •         Will Computational Imaging drive a resurgence of lower-latency applications?
  •         Will hand gesture-recognizing games such as the 2003 Sony EyeToy or the Microsoft Kinect come back as inexpensive parts of every display monitor?
  •         Will images integrate with each other to form composite images from many peoples’ cameras?
  •         Will posture recording be standard wherever people play ball sports? 
  •         Will camera systems become deeply embedded in real-time critiques to help skiers, drivers, and for all dexterous activity?
  •         How will artists use Computational Imaging to create illusions and emotions in viewers?
  •         Will smart cameras replace guide-dogs for blind people?
  •         Will a sign-language interpreter be built into a camera to let a blind person read human gestures?
  •         Can inexpensive cameras record most blood values through the skin?
  •         How common will it be for camera-display pairs to replace windows in airplanes and buildings? 
  •         How soon will camera glasses improve our eyesight beyond the best optical glasses? 
  •         Will unsupervised learning allow cameras to adapt to new visual environments?
  •         Will camera-based inspection and assembly best humans in every case?
  •         Will the cameras help people better bridge the huge dynamic ranges of light as demonstrated in applications like welding?
  •         Will cameras help people better bridge the range of magnifications involved in seeing at scale and through a microscope?
  •         Will glasses make us aware of spectra that aren’t currently at our eyes’ disposal?   This could help with where to stand for great Wi-Fi and cell coverage.

Conclusions

Too often we silo our work and perspective.  This roundtable tried to counter that.  Extremely accomplished people found themselves surprised and learning from each other.  Some asked simple questions and were delighted that others could answer in the affirmative: Can computation change cameras in real time? Can Computational Imaging help calibrate lenses as they are affixed to sensors at volume?  How can changing the sensor help computation? How can customers help sensor designers create their best products?

Creating the integrated solutions of today spans physics to AI. The tools for each stage are designed for specialists and take years to learn.  On the other hand, an accessible system like OpenCV allows sophisticated solutions with simple cameras. The roundtable puzzled over how to give students and system designers the ability to reach deeper, to take better advantage of, and to help improve, the sophisticated optical and sensor-based Computational Imaging that is possible today. Would a system or a set of computing libraries that span sensor to application accelerate where cameras are used? The members of the roundtable felt that an accessible toolchain could accelerate the industry.

Would a system or a set of computing libraries, simulation, and visualization tools that address designing systems, from optical sensors to application scenario creation, accelerate how and where cameras are used?  We have WYSIWYG mechanical design systems; what would a WYSIWYG optical and Computational Imaging tool look like? Would we suddenly have undergraduates doing Computational Imaging to put cameras in baseballs, Ping-Pong paddles, frisbees, and skis for fun?

As with animals, electrical-field sensing, magnetic-field sensing, sonar, and cameras all come into play in today’s integrated camera solutions. We often choose solutions by what is easiest to implement: is there an existing solution?  Now we have many new options; we can provide libraries of existing solutions. We can make tools to help evaluate tradeoffs in solutions.

The direction of helping people easily understand design alternatives isn’t the only direction of interest.  Companies like Ruda-Cardinal make super-specialized, well-engineered solutions.  Companies like Google usually have shorter design cycles to make pervasive solutions.  Companies like OmniVision and Edmunds might make subsystems that these integrators will use.

The makers of such design tools could, like COMSOL, solicit subscribers, but subscriptions are a barrier to widespread use.  They might instead make the tools available as a way of helping people procure their solutions, as the microprocessor industry does.

Academic, industrial research, and development teams are busy using Computational Imaging to improve images, but also to move cameras from being recording devices to active sensors useful in billions of places.  The trajectory is clear: sensor solutions are based on varied phenomena, but light-based cameras are a premier choice for non-contacting interpretation of environments.  Technology is advancing to marry computational resources for interpreting the semantics of what can and should be done with the physical designs for cameras. Integrating computation and physical/optical design can improve applications, quality, power consumption, and response time. The variety of alternatives for specialized as well as versatile imaging systems might someday rival the complex and huge variety found in animals.

The 2022 Edmunds Computational Imaging Roundtable will be a cornerstone in imagining how Computational Imaging might be integrated into camera development at large.  I see roundtables like these as super productive ways to see the future we get to build.

Future Roundtables

I can imagine that roundtables on various topics could be extraordinary:

  •         Deficits in popular ML approaches for Computational Imaging such as U-net.
  •         Computational Imaging for non-optical imaging arrays
  •         Computational Imaging for depth field, event, affine cameras
  •         Optics design impacts on Computational Imaging
  •         Sensor design for maximal computational flexibility
  •         Tools for function-oriented camera design
  •         Libraries for Computational Imaging value, calibration, and image improvement in camera design
  •         Simplifying camera design experiments, etc.

 

[1] We built a thick metal cabinet to protect the laser illuminator and CCD sensor.  I wrote the analytical program, a large program written in BASIC running on a Data General Eclipse, to analyze the ways that wood was being wasted.

[2] Ted Selker. Image-based focusing. Robotics and Industrial Inspection, Vol. 360, pp. 96-99, 1983.

[3] https://www.synopsys.com/optical-solutions/codev.html

[4] https://www.comsol.com/

[5] https://nanotechsys.com/

[6] https://opencv.org/

[7] https://www.zooportraits.com/how-do-animals-see/

[8] https://www.zooportraits.com/how-do-animals-see/

[9] Alexander L. Stubbs and Christopher W. Stubbs. Spectral discrimination in color blind animals via chromatic aberration and pupil shape. PNAS 113 (29): 8206-8211, 2016.

[10] https://www.shellmuseum.org/post/the-amazing-cuttlefish-eye

[11] J. Y. Lettvin, H. R. Maturana, W. S. McCulloch, and W. H. Pitts. What the Frog’s Eye Tells the Frog’s Brain. Proceedings of the IRE, 47(11): 1940-1951, November 1959.

[12] Ted Selker. Image-based focusing. Robotics and Industrial Inspection, Vol. 360, pp. 96-99, 1983.

[13] Ed Yong. An Immense World: How Animal Senses Reveal the Hidden Realms Around Us. The Bodley Head, 2022. 464 pp.

[14] John Wetzel. Face Interface. MIT MS thesis, 2007.

[15] Shumin Zhai, Carlos Morimoto, Steven Ihde: Manual and Gaze Input Cascaded (MAGIC) Pointing. CHI 1999: 246-253.

[16] Seattle Laboratory of Robotics

Acknowledgment 

This work is an overview of a workshop supported by Edmunds Scientific.