Electrons

Plasmonic filters in nature — a follow-up

Just a quick follow-up on my previous post depicting surface plasmons. I came across a well-prepared pop-science video about butterflies under a scanning electron microscope. Notice how pouring isopropanol over the butterfly’s wings changes the wavelength they reflect. Destin describes the “loss of color” as occurring because light can no longer penetrate the nanoholes of the butterfly, which is partly true due to the reflections at the liquid medium. However, what also happens is that the isopropanol modulates the oscillation frequency of the free (unbound) electrons in the “material” of the butterfly’s nanohole wing, thereby reducing or modifying the coupling between the incident photons and the oscillating electrons. And doubtlessly, all sorts of other second-order “ref-lec-to-rac-tive” effects as well. Notice the difference between the brown/blue hole arrays and their diameters.
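
For a feel of the numbers, here is a minimal sketch using the textbook first-order estimate of the transmission peak of a square nanohole array; the pitch and permittivities are illustrative assumptions of mine, not values measured from the butterfly:

import math

def eot_peak_nm(pitch_nm, i, j, eps_metal, eps_dielectric):
    # First-order surface-plasmon transmission peak for grating order (i, j):
    # lambda = P / sqrt(i^2 + j^2) * sqrt(eps_m * eps_d / (eps_m + eps_d))
    return (pitch_nm / math.sqrt(i**2 + j**2)) * math.sqrt(
        eps_metal * eps_dielectric / (eps_metal + eps_dielectric))

# A 400 nm pitch array in silver (eps ~ -18 in the visible), first on SiO2
# (eps ~ 2.13), then flooded with isopropanol (eps ~ 1.9):
print(eot_peak_nm(400, 1, 0, -18.0, 2.13))  # ~622 nm
print(eot_peak_nm(400, 1, 0, -18.0, 1.90))  # ~583 nm -- the peak shifts

Swapping the dielectric shifts the resonance by tens of nanometers, which is the same mechanism as the color change in the video.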


An idea for a quick investigation comes to mind: what if one measures the energy of the reflected (filtered) light and compares it with the energy of the incident light for the very same filter bandwidth? How efficient are the butterflies’ nanohole arrays compared to man-made ones? Most likely the answer is not that straightforward, as man-made filters are designed for an optimized transmission coefficient, while butterflies use nanohole arrays to reflect light in order to attract or protect themselves from other species. It is also highly likely that tons of investigations have already been conducted on the butterfly metamaterial topic.

One last thing I came across some time ago: similar nanohole patterns are observed when anodizing aluminium and subsequently etching it with e.g. a fine ion etcher. Here’s a preview on the topic: A visible metamaterial fabricated by self-assembly method.

Plasmonic filters — the deus ex machina of the century

Today’s semiconductor news on Spectrum presents an article about plasmonic color filters, “Flexible and Colorful Electronic Paper Promises a New Look for eBooks” by Dexter Johnson. Coincidentally, last week I had a heated discussion with a few physicists on the same topic, so I thought I’d introduce this animal here and also state my personal engineering view on it.

Act one: introduction

In a very broad sense, surface plasmons are free-electron oscillations occurring at the interface between a metal and a dielectric, excited by a light-metal-dielectric interaction. These oscillating electrons, unbound to any atom, can be created by incident photons or electrons striking the junction between the two materials. The frequency of their oscillations depends on the junction thickness itself, as well as on the distance to neighboring pairs of oscillating electrons. When light strikes a plasmonic material, apart from exciting free electrons, it also couples to them (they actually form a kind of surface electromagnetic field), and thus creates a self-sustaining interference phenomenon. The key feature of this concept is that only photons with a specific energy can couple with the oscillating electrons, while the rest pass through; hence, this process can naturally be used as a color filter.

Such metamaterials were described theoretically in the mid-50s, but they have only become popular during the past ten years due to the rapid improvement in lithographic techniques. They could be used in applications ranging from display technology to image sensors, which is why the field has just recently been approached by the big players in chip fabrication. Both of these technologies need some kind of light-filtering element, and both of them currently use organic color filters to achieve their goals. The problem with organic color filters is that they are complicated to produce and degrade quickly with time, especially when UV and high temperatures are involved, as is the case on the die of an image sensor.

Act two: the complication

Plasmonic color filters are physically formed by a sandwich made of metallic bread (usually tungsten or any of the noble metals) and a dielectric butter (SiO2 or equivalent). This sandwich is then accurately bitten by rats, creating a superfine nanohole array which looks like this:

Basic structure of a generic plasmonic color filter

At first glance such a technology looks very silicon-friendly: all we need to create our filter structures is the addition of two extra metal layers to the CMOS process. Sounds like a piece of cake. But why did the big semiconductor players abandon this scheme as soon as they had a sniff at its surface? Plasmonic color filters are still very experimental, and I think we cannot yet call them even an immature technology. Although there are quite a few academic groups working on the problem, the prospects for mass-scale production currently seem miraculous. But hey, I am very happy when I see progress on the topic, folks! We’ve seen it many times: many advancements in history have been the outcome of scientific mumbo-jumbo once labeled as absurd or strange.

It has been shown that plasmonic filter structures have an excellent bandpass quality factor, dwarfing even the best organic compound color structures ever reported. The Q-factor, however, is not the only element in the picture. The transmission coefficient of the best reported plasmonics is in the order of 0.2-0.3, which is very disappointing. The filter’s response is also not very steady — towards UV and deep UV, full transmission is usually observed. Nevertheless, perhaps this could be solved with an extra glass UV filter. But still, can we not use them for accurate light spectrometers, where light is abundant and low transmission coefficients are affordable?

Act three: climax

Well, here is where engineering comes into play and destroys everything, for the time being… Plasmonic filters currently rely on an extremely accurate lithographic process called electron-beam lithography which, combined with dry plasma etching, has an accuracy in the order of a few nm. Apparently, even that is not enough to create a filter with a good yield. All reported plasmonic filters, including the one in this morning’s popular-science article in Spectrum, are manufactured on a scale of a few micrometers — i.e., as the authors of the paper suggest, just a pixel of the Retina display of an iPhone. Using e-beam lithography in mass production is a fairytale. Gold is a forbidden word in semiconductor fabs, so this material falls out as a filter candidate as well. Solving optical crosstalk and alignment in adjacent filters for RGB arrays seems like another wonder to me. How about the fact that the microlenses deposited on top of the filters are polymer, which is another dielectric and, as the theory suggests, will cause a modulation of the surface plasmon resonance? Or the non-uniformity of the dielectric’s thickness? All these issues create a highly non-linear outcome. Engineers don’t like non-linearities… neither does large-scale production.

Act four: resolution

The outcome of this drama is still very unforeseeable, and here I let you cast the die. One thing is certain — no matter the aftermath — a generation of scientists is gaining momentum along with the new era of metamaterial science. However, for the time being, whenever somebody starts talking about how plasmonics will change the world in a year’s time, I just smile and listen carefully.

ATLAS silicon strip detectors and charge sensing

Some time ago, scientists at the Large Hadron Collider (LHC) at CERN reported the potential discovery of a new fundamental particle which does not fit anywhere in the Standard Model of physics. According to “the news”, the latest data from ATLAS and CMS (the LHC’s two largest detectors) show two unexpected “bumps” over the usual gamma-ray background, which are correlated and were acquired from two separate detectors. According to physicists, this may point to the existence of a particle that dwarfs by light-years even the recent discovery of gravitational waves.

It is not yet certain whether this measurement data will be confirmed or rejected, but the latest news suggests that the significance of the results is fairly low — approximately 1.6 sigma. That fact inspired me to write a bit about the basics of the basics of silicon strip detector charge sensing, which is stone-age technology in commercial light-sensing CMOS image sensors nowadays.

So what are strip detectors and how are they used? They are basically PN junctions with an extremely wide aspect ratio which, as their name suggests, look like strips. Here’s a sketch:

A bird’s eye view of silicon strip detectors

These strips usually share an N-type substrate while each is P+ doped, covered by aluminium with some extra insulation layers in between. The LHC scientists are interested in observing interference patterns in X- and gamma rays caused by the decay of the sought-after particles. Apart from their intensity, what also interests them is the spatial trajectory of the high-energy rays. In order to detect the 2D position of the gamma rays, they have invented a very clever strip array configuration. Let me explain, here’s another sketch:

Particle incidence angle detection using parallel strip configuration

An incident particle has a higher probability of generating electron-hole pairs in the strip which is crossed by the X-ray photon, which already creates a kind of one-dimensional readout. To obtain the angular information, the adjacent strips can also be read out and a particle correlation can be reconstructed. In other words, if the gamma ray happens to fall at an angle of e.g. 45 degrees, it will generate electron-hole pairs in two or three adjacent silicon strips. This already gives us almost-2D particle trajectory information. However, CERN engineers have decided to take the technique even further, by adding another cross-pair of detectors underneath the upper set:

Hybrid X- and Y- direction parallel strip sensor configuration

That way they can extract position and angular information not only in the x-direction but also in the y-direction, which, with some post-processing, provides accurate particle intensities and trajectories. A toy calculation of the angular sensitivity is sketched below.
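
As a toy illustration of the angle reconstruction (my own simplification, not CERN’s actual algorithm): a straight track that fires k adjacent strips of pitch p while traversing a sensor of thickness t arrives at roughly atan(k·p/t) from the normal.

import math

def incidence_angle_deg(strips_crossed, pitch_um, thickness_um):
    # Straight-line track: lateral travel = strips_crossed * pitch
    return math.degrees(math.atan(strips_crossed * pitch_um / thickness_um))

# Assumed figures: 80 um pitch, 300 um thick sensor, 3 strips firing
print(incidence_angle_deg(3, 80, 300))  # ~38.7 degrees, near the 45-degree example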

But how can these PN silicon strips be read out? The simplest method of reading out thousands of strips is to use an integrated charge amplifier and digitization electronics per channel. Charge-sensitive amplifiers have not been used very “widely” with passive-pixel CMOS image sensors in the past, but they have proven very suitable for single-detector readout. They are still used in single-line CMOS line-scan sensors due to their low-noise capabilities at low detector capacitance.

Typically, operational amplifier-based integrators with an integrating capacitor in the feedback are the commonly used scheme, which is sketched below:

A basic charge amplifier topology for strip sensor readout

These amplifiers have a high input impedance; they integrate weak charge pulses, convert them into voltage pulses for amplification, and then buffer the output for readout by the next block in the chain. Because of that operation, this type of amplifier is called a “charge amplifier”. The first stage of a charge amplifier is usually a low-noise differential pair, and its open-loop gain is set sufficiently high so that its amplification is not influenced by the detector capacitance, which reduces the gain in the feedback. The output stage is a low-impedance buffer so that it can drive the next circuits in the chain, typically the S/H stage of an ADC.

When particle decay rays strike the silicon strips, signal charge pulses Qs are generated with an amplitude proportional to the particle energy. Due to this charge generation, the input potential of the charge amplifier lifts up and, at the same time, a potential of reverse polarity appears at the output, due to the negative feedback around the amplifier. Because the amplifier’s open-loop gain is sufficiently large, the output works through the feedback loop to pull the input terminal’s potential back to zero, after some settling time dependent on the unity-gain bandwidth of the opamp itself. As a result, the signal charge pulses Qs are integrated onto the feedback capacitance Cf, and the output voltage changes according to the integrated charge. Since the feedback resistor Rf is connected in parallel with the feedback capacitor Cf to provide DC feedback, the output voltage then slowly discharges with the time constant τ = Cf · Rf. The output voltage of such a charge amplifier is scaled by the size of the feedback capacitor Cf, so Qs and Cf must be chosen wisely to fulfill the desired dynamic range. It follows that the noise performance and dynamic range of this readout scheme form a fundamental trade-off: increasing the dynamic range (a larger Cf) lowers the voltage swing per unit charge and hence raises the input-referred noise; the reverse is also applicable.
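
Here is a minimal numeric sketch of that behavior with illustrative component values (my own assumptions, not figures from any real front-end):

import math

Qs = 4.0e-15   # signal charge: 4 fC, ~25 000 electrons (assumed)
Cf = 100e-15   # feedback capacitor: 100 fF (assumed)
Rf = 10e6      # feedback resistor: 10 MOhm (assumed)
tau = Rf * Cf  # discharge time constant = 1 us

def v_out(t_s):
    # Ideal response: a -Qs/Cf step at t = 0, decaying through Rf
    return -(Qs / Cf) * math.exp(-t_s / tau)

print(v_out(0.0))   # -40 mV peak: the swing is set by Qs/Cf
print(v_out(2e-6))  # ~ -5.4 mV left after two time constants

Doubling Cf would halve the 40 mV swing (more headroom for large pulses) while raising the input-referred noise — exactly the trade-off described above.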

Note that the ATLAS detector has a total of over 200 m2 (square meters!!!) of pure detector strips! With a strip size of 0.01 mm by 40 cm, that works out to a staggering number of about 50 million strips and readout channels, respectively. With such a huge set of sensors, both ATLAS and CMS rely on the statistical significance of their measurements, and the weird correlation in the slight gamma peaks might truly be caused by a completely new fundamental particle. However, the readout complexity of such an enormous set of sensors is colossal, which makes the introduction of errors a plausible explanation as well.
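
A one-liner to double-check the strip count from those figures:

area_total_m2 = 200.0
strip_area_m2 = 0.01e-3 * 0.40        # 0.01 mm x 40 cm per strip
print(area_total_m2 / strip_area_m2)  # 5e7 -> about 50 million strips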

Fingers crossed that all the sensing electronics work flawlessly and that any abnormal peaks detected are due to a genuinely new particle.

Chiseling out The Chip!

This post may be a bit redundant with the info I added in the other place, but I am excited, so I felt the need to rewrite some of it here.

Le Chip! This work took a while. To celebrate, I thought it deserved a few words in the blogs. During the past year or so, I was/have-been/will-continue-to-be working on an image sensor ADC testchip. It was finally taped out yesterday! What’s left now is some additional gastronomical work on the tapeout cake and the draining of a dusty bottle of champagne.

The chip in all its ugly majesty with all these redundant power pads and LVDS pairs.

The core of the testchip is a fast 12-bit column-parallel ramp ADC at 5 µm pitch, utilizing some special counting schemes to achieve the desired 1 µs ramp time at slow clock rates. Alongside it, to be able to fully verify the pipelined CDS functionality and crosstalk, I’ve built a pixel array in line-scan configuration, some fast LVDS drivers, clock receivers, references, state machines, a few 8-bit iDACs, bond pads, ESD, and some other array-related stuff, all from scratch! The chip has a horizontal resolution of 1024 pixels and 128 lines with RGBW color filters and microlenses.
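
As a side note on why the counting scheme matters at all (my own back-of-the-envelope, not a description of the actual scheme on the chip): a naive single-slope counter resolving 12 bits within a 1 µs ramp would need an absurd clock.

# A plain 12-bit single-slope conversion needs 2**12 clock periods per ramp.
N_BITS = 12
RAMP_TIME_S = 1e-6
naive_clock_hz = 2**N_BITS / RAMP_TIME_S
print(naive_clock_hz / 1e9, "GHz")  # ~4.1 GHz -- hence the special counting schemes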

In the top-left corner there are some experimental silicon photomultipliers and SPAD diodes. I plan to measure these for fun, and I promise to post the results in one of the two blogs.

Unfortunately, this chip won’t yield tons of publication work apart from the core ADC architecture and comparator. To test the ADC one needs a whole bunch of other fast readout blocks, which in the end are nothing novel, but one needs them nonetheless, and designing them takes time. Finishing up this test system was a lot of work, and I realize that it might be a bit risky and ambitious to be doing this as part of a doctorate. What if it fails to work because a state machine has an inverted signal somewhere? Or the home-made ESD and pads suffer from latch-up? Or the LVDS driver CMFB is unstable and I cannot read data out? Or there is a current spike erasing the contents of the SRAM? Or, or, or?

We university people don’t have the corporate power to tape out metal fixes twice a month until we get there. I probably have another two or three chip runs for my whole doctorate. It may therefore be better (and more fun) to stick with small but esoteric modules, which one can verify separately and have time to analyze in detail. But hey, I’ll quote a colleague here: “It is what it is, let’s think how we can improve things.”

Finally, I have added this little fella who I hope will be my lucky charm.

Mr Le Duck!

With his 15 µm of height, could he compete in the annual “smallest duck on the planet” contest? Cheers!

Random “stuff” about silicon photomultipliers and avalanche diodes

This writing may be considered a very random continuation of an older post briefly mentioning the usage of photomultiplier tubes for gamma spectroscopy, but the truth is that it is so random that it could be impossible to follow. I have some spare 300x600 µm of silicon space and have recently been thinking about how to fully utilize the area on the chip I am about to tape out. Apart from having a free corner of silicon, unfortunately, time is not infinite nor free either (what?) and I have just about a week to think, analyze and engineer whatever it will be. This is why I am also posting this, hoping that in the process of writing and re-reading, I will suddenly get that brilliant idea which will banish all first-world problems and get me through academia obscura.

Back to the point. Some five years ago, the image sensor field suddenly realized that instead of trying to integrate the current from photo-generated electron-hole pairs in the form of charge stored on a capacitor, one could initiate an ionization process triggered by a single photon in a PN junction with a very high, concentrated electric field — a process often referred to as impact ionization. The latter is essentially the same as the photomultiplying effect in vacuum tubes, now with the difference that the medium where the process occurs is silicon, plus some additional second-order effects. The old photon-hit–electron-integrate–readout approach is a technology which has been here for decades, and still impresses us with its mature-immatureness. How is impact ionization a better solution than the conventional one? The answer is — it’s not (for now), and it depends.

Here is my simple explanation of why impact ionization detectors, also called Single Photon Avalanche Diodes (SPADs), should theoretically perform better under low-light conditions than the conventional technologies. In electronics, we often use the Friis formula, which states that to minimize the noise figure of a signal chain, we should apply gain in the system as early as possible, and/or perform the less noisy operations first:

F_{total} = F_1 + \frac{F_2-1}{G_1} + \frac{F_3-1}{G_1 G_2} + \frac{F_4-1}{G_1 G_2 G_3} + ... + \frac{F_n - 1}{G_1 G_2 ... G_{n-1}}

It is very intuitive, and it can be applied to the signal chain of an image sensor too, even though that chain begins with a noisy source of photons (photon shot noise), distorted by microlenses, attenuated by color filters, converted to electrons with noise, electrons to voltage (?) and voltage to a digital number. Most ultra-low-noise CMOS image sensors use so-called High Conversion Gain (HCG) pixels. In simple language, this means that their integration capacitor (FD — floating diffusion) is minimized as much as possible compared to the photodiode junction capacitance. This results in a larger voltage swing on the integration capacitor (FD) per hit photon, which is basically equivalent to maximizing the gain at the very beginning of the photon-electron conversion process. Remember the Friis formula?
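
As a quick numeric illustration (the capacitances below are assumed, not taken from any particular sensor), the conversion gain of the pixel is simply q/C_FD — a smaller floating diffusion means more microvolts per electron before any noisy downstream stage:

# Conversion gain of a pixel in uV per electron: q / C_FD
Q_E = 1.602e-19  # elementary charge, C

def conversion_gain_uV_per_e(c_fd_farads):
    return Q_E / c_fd_farads * 1e6

print(conversion_gain_uV_per_e(1.6e-15))  # ~100 uV/e-: a small, HCG-style FD
print(conversion_gain_uV_per_e(8.0e-15))  # ~20 uV/e-: larger FD, lower gain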

Why does a SPAD look promising for ultra-low-light imaging? Avalanche photodiodes, based on impact ionization, have an enormous gain; thus, a single photon can pull the trigger, causing the diode to hit the rail. This makes life easier for the rest of the measurement chain too: instead of a complex ADC, we can just use a counter. Sounds brilliant; however, there are some difficulties which prevent us from reaching perfect photon counting. Here’s a small list of what I am thinking about right now:

  1. The gain in a SPAD may be considered infinite, so according to Friis the output should be noise-free. However, SPADs, for now, are triggered not only by photons, but also by random thermal excitation, sudden carrier releases from defect traps, and gamma-ray impacts with electrons. The main quality parameter of a SPAD is its so-called Dark Count Rate (DCR), or false triggers per unit time under dark conditions. This is a very primitive measurement; however, until now there is no good method for quantifying what part of the DCR is caused by each of the aforementioned side effects.
  2. After initiation of avalanche breakdown, in order to arm the diode for another measurement round, we need to cut its power supply and then gradually apply the high reverse bias voltage again. The time used for this operation is called reset time. This reset (dead) time is the major obstacle to achieving full single-photon detection at high light intensities (see the toy calculation after this list).
  3. SPADs work under high reverse bias voltage, which makes them hard to integrate with readout electronics on 1v2/3v3 CMOS processes while still keeping a low reset-time contribution from the readout.
  4. SPAD structures can be easily implemented in standard CMOS and this has been done by a number of research teams during the past 5 years.
  5. Most of the researchers are working on SPADs for Time of Flight (ToF) imaging, or ultra-low-light sensing.
  6. Most of the research is done on multi-channel readout, which is actually the way to go, but is very challenging in a 2D process (no 3D stacking).
  7. Can we use multiple arrays, but have a single readout?
  8. Do we now have access to hundreds of PM tubes on a single chip? What could we use these for?
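
To put the reset-time issue from item 2 into numbers, here is a toy sketch using the standard non-paralyzable dead-time model (the 50 ns reset time is an assumed, illustrative figure):

# Toy model of item 2: a non-paralyzable detector with dead (reset) time tau
# reports m = n / (1 + n*tau) counts/s for a true photon rate n.
def measured_rate(true_rate_hz, dead_time_s):
    return true_rate_hz / (1.0 + true_rate_hz * dead_time_s)

tau = 50e-9  # assumed 50 ns reset time
for n in (1e5, 1e6, 1e7, 1e8):
    print(f"true {n:.0e} photons/s -> measured {measured_rate(n, tau):.2e} counts/s")
# at 1e8 photons/s the SPAD reports only ~1.7e7: photon counting saturates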

Access to hundreds of “PM tubes” on a single chip? — we call that a silicon photomultiplier (SiPM). Such devices have existed for a long time and are offered as discrete components; however, the information from each PM is hard to measure, as all PMs share the same bus and we get an output current that is very difficult to measure — difficult in the sense that it is noisy and hard to distinguish. Conventional readout of SiPMs integrates and low-pass filters the output current before performing measurement/digitization. To grasp what I am referring to, here’s a simple electrical equivalent diagram of a SiPM:

Silicon Photomultiplier equivalent diagram

You essentially see a number of SPAD diodes with a passive quench (the resistor in series) which acts as an automatic reset. When the diode fires, the resulting high in-rush reverse bias current causes a voltage drop across R (and at the SPAD’s cathode, respectively) which acts as a feedback mechanism and prevents the avalanche breakdown from continuing, thus resetting the diode by gradually restoring the cathode voltage. Typical SiPMs on the market have a passive quench (resistor in series) with the SPADs; the latter are connected in parallel and can reach a relatively large count, in the order of hundreds to thousands. The number depends on their dark and firing currents as well as the desired photosensitivity. To help you get an idea of how a SiPM should look physically, here’s an example of a small 8×8 SPAD array in SiPM configuration I just sketched in Virtuoso:

An 8×8 diode Silicon photomultiplier array layout diagram

The circular shape of the junctions comes from the fact that we want a uniformly strong electric field around the junction, which should make the diode more susceptible to avalanche breakdown. Ideally it should be entirely circular; in the case above I’ve used a hexagonal shape, as this CMOS process does not allow angles other than 45/90 degrees. Using hexagonal diodes generates electric-field stress points, which increases the dark count rate. One possible structure for SPAD formation in standard triple-well CMOS is a junction between an NW/DNW and a local P-Well created over the Deep N-Well. The latter local P-Well has the shape of a doughnut and acts as an electric field concentrator. Here’s a cross-section sketch:

SPAD in CMOS cross-section

The thickness of the P-Well doughnut determines the strength of the electric field imposed from the surrounding N-Well doughnut to the P+ active N-Well junction. The multiplication area is formed under the island in the center of the diode. The material around the active area can be covered with top metal layers to block light and prevent electron-hole pair stimulation outside the intended junction. Typical SiPMs include a poly quenching resistor which surrounds the SPAD and usually has a rectangular shape. In standard CMOS, however, apart from the passive quench methodology, we can build all sorts of active quench circuits. What if we combine those in a SiPM? Would such a combination make the integrated current measurements easier?

These last questions remain open, as does this random post. Let’s see what I come up with in the next few remaining days, and let’s hope that there will be a follow-up post containing some experimental results.

Oh, by the way, if you want to read an excellent introductory material on SiPMs, check out this link.

Energy in a lightning strike

Summertime in Bulgaria is often beset by thunderstorms involving a large number of lightning strikes towards the ground, causing damage every year. This made me think about the energy stored in an average lightning bolt. What if we could capture and store it using a “lightning bolt farm”? Would it solve the world’s energy problems? Even Doc Brown used lightning bolts to power the DeLorean so that Marty could get back to 1985. Sounds like a promising energy source, but is it? Here are some of my very primitive thoughts.

According to some online sources, an average lightning strike carries an energy of 0.5 to 5 gigajoules [1]. This energy, of course, is released over a time on the order of microseconds, and capturing it is difficult and not within the scope of this post. Let’s say that we can capture and store a 2.5 GJ lightning bolt. How much is this, and would it be enough? According to Wikipedia, the average energy density of coal is roughly 24 megajoules per kilogram. This yields roughly 100 kg of coal per lightning bolt. A dozen or so lightning bolts per storm bring us about a ton of coal, not bad…

Assuming a 100 % capture efficiency, a single lightning bolt can potentially bring us:

E = \frac{2.5\times 10^{9}\ \mathrm{J}}{3.6\times 10^{6}\ \mathrm{J/kWh}} \approx 694\ \mathrm{kWh}

Comparing it to Bulgaria’s one and only nuclear power plant, a single lightning bolt’s energy amounts to about 0.03 % of the plant’s hourly output at today’s capacity. Pretty low, hmm, so we need to capture more of them — about 3 000 lightning strikes to match that output… every hour! And this is to cover only one nuclear power plant, which is by far not enough even for Bulgaria’s needs alone.
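
Here is the same arithmetic as a quick sanity check (the ~2 GW operating capacity for Kozloduy is my assumption):

BOLT_J = 2.5e9               # one captured bolt: 2.5 GJ
KWH_J = 3.6e6                # joules per kWh
bolt_kwh = BOLT_J / KWH_J
print(bolt_kwh)              # ~694 kWh per bolt
print(BOLT_J / 24e6)         # ~104 kg of coal equivalent (24 MJ/kg)

PLANT_W = 2e9                                # assumed plant capacity
plant_kwh_per_h = PLANT_W * 3600 / KWH_J     # 2e6 kWh produced every hour
print(100 * bolt_kwh / plant_kwh_per_h)      # ~0.035 % of an hour's output
print(plant_kwh_per_h / bolt_kwh)            # ~2880 bolts needed every hour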

How many crude oil barrels are there in a lightning bolt?

One barrel of crude oil holds an equivalent energy of about 6 GJ, so roughly half a barrel per lightning bolt. The daily petrol consumption in the United States, according to the U.S. Energy Information Administration for 2014, is 20 million barrels per day. Thus, we would need about 40 million lightning strikes per day to satisfy America’s petrol needs. I wondered whether the EIA’s figure can be trusted — 40 million daily lightning strikes carry the energy of roughly 600 Bulgarian nuclear power plants running at full capacity around the clock — but on reflection, it might not be that surprising. America has a population of about 300 million; if half of its residents drive their cars about 50 miles every day, that is some 7.5 billion miles/day, or in human units about 12 billion km/day. An average car burns 6 l/100 km, and from there we get roughly 700 million litres of, say, gasoline per day. Assuming that with cracking processes [5] we can distil 20 % gasoline out of a unit of crude oil, we roughly reach the reported number of 20 million barrels/day.
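
Redoing that consumption chain in code, with all the round numbers assumed above:

drivers = 150e6                          # half of ~300 M residents
km_per_day = drivers * 50 * 1.609        # 50 miles each -> ~1.2e10 km/day
litres_gasoline = km_per_day * 6 / 100   # 6 l/100 km -> ~7.2e8 l/day
litres_crude = litres_gasoline / 0.20    # assuming ~20 % gasoline yield
barrels_per_day = litres_crude / 159     # ~159 l in a barrel
print(barrels_per_day / 1e6)             # ~23 million -- close to EIA's 20 M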

With this last fact it becomes apparent that humanity is not going to be powered by lightning, at least not in the near future; however, there is some ongoing research on the topic [2], [3], [4].

Lastly, a fun estimate: how many electrons are there in an average lightning strike?

2.5 gigajoules converted to electron-volts is:

\frac{2.5\ \mathrm{GJ}}{q} = \frac{2.5\times 10^{9}\ \mathrm{J}}{1.602\times 10^{-19}\ \mathrm{J/eV}} = 1.56\times 10^{28}\ \mathrm{eV}

If we assume that the 2.5 GJ of work moves the electrons across a 1 megavolt potential (between the cloud and the ground), this yields:

\frac{1.56\times 10^{28}\ \mathrm{eV}}{10^{6}\ \mathrm{V}} = 1.56\times 10^{22}\ \text{electrons}

or, about 15 Zetta electrons.

Not a very meaningful comparison, but the average full-well capacity of a pixel in a standard CMOS image sensor is 20 000 electrons. 🙂

References:

[1] Yasuhiro Shiraishi; Takahiro Otsuka (September 18, 2006). “Direct measurement of lightning current through a wind turbine generator structure”. Electrical Engineering in Japan 157: 42. doi:10.1002/eej.20250. Retrieved 24 July 2014.

[2] Bhattacharjee, Pijush Kanti (2010). “Solar-Rains-Wind-Lightning Energy Source Power Generation System” (PDF). International Journal of Computer and Electrical Engineering 2: 353–356. doi:10.7763/ijcee.2010.v2.160. Retrieved March 20, 2014.

[3] Knowledge, Dr. (October 29, 2007). “Why can’t we capture lightning and convert it into usable electricity?”. The Boston Globe. Retrieved August 29, 2009.

[4] Helman, D.S. (2011). “Catching lightning for alternative energy”. Renewable Energy 36: 1311–1314. doi:10.1016/j.renene.2010.10.027. Retrieved March 5, 2013.

[5] James. G. Speight (2006). The Chemistry and Technology of Petroleum (4th ed.). CRC Press. ISBN 0-8493-9067-2.

Applied image sensors – EMCCDs and yeast cells

Finally, after a long break I am back!

We had exceptionally good weather here last weekend, and I spent some time in the university parks with a friend who studies at the biochemistry department. Throughout our walk I had the chance to learn a bit more about the DNA replication mechanisms in eukaryotic cells and the many things that can go wrong during cell replication. Our discussion gradually hopped over to the methods of imaging live organisms and eventually to the imaging sensors used in fluorescence microscopy.

Anyway, I need to first say a few words about the essence of their problem before hopping over to the electronics part. DNA is a nucleic acid which is, as we all know, unique and contained in each living cell. Apparently it is also a “single molecule” in terms of quantity in each cell, i.e. we can’t “suck” a bit of DNA from one cell and inject it into another. If we want to produce more DNA, or cells, the molecule needs to be split into two single strands, which are then each completed into a full copy in the new cell thanks to a comb-shaped cell element (forgot its crazy name?) which does the magic. This process is very sensitive to external factors (as an ordinary person I imagine poison), especially towards the end of the process, about which very little is known. Only if we had a way to snapshot this process in extreme detail could we fully understand the replication mechanism. But let’s not drift further (all of the above is extremely vague to a biologist’s eye) and get back to the topic.

Apparently biochemists know very, very little about these processes, and all their experiments seem to be based solely on the trial-and-error methodology. As of today we do not have technology for 3D imaging of individual live cells, so biochemists are forced to use what is called fluorescence microscopy. This also forms the essence of this post — what are EMCCDs and how are they used in imaging of live organisms?

Before tackling this question, let me explain why fluorescence microscopy is used in biology and why other microscopy technologies are not suitable for cell imaging. Imagine taking a live cell to a Scanning Electron Microscope (SEM) — no, even better, imagine you were the cell to be scanned. You are locked inside a vacuum chamber and a gun is pointed towards you, constantly shooting electrons with energies over a few keV. You would have a hard time getting out of the vacuum chamber alive, not to mention that the biologists want to see you replicating while they are scanning (shooting) you. It simply is not possible: otherwise-living cells die as the very first electrons pierce through the cell membrane. Scanning Tunneling Microscopy is also impossible, as it focuses on the atomic scale and passes a current through the sample. Our cells won’t feel very good, even if it were possible to adjust the STM needle with such precision — the passing current would kill them immediately. A technique which is used instead for increasing the resolution of otherwise “standard” microscopes is the so-called fluorescence microscopy.

Fluorescence microscopy extends our ability to image the “internals” of live cells. If a fluorescing marker is injected into our specimen, by exposing it to UV light we can observe emission at a different wavelength from the marker. We can therefore focus our imaging on very specific “internals” of the cells. Ordinary microscopy techniques focus on the specimen’s surface imaging and/or light absorption. The fluorescence method, however, did not make much sense until recent years, as the relatively low quantum efficiency of the fluorescing markers makes the sample difficult to image — not to mention that the naked eye is nowhere near as sensitive as the sensor needed here. With the emergence of more light-sensitive image sensors in recent years, fluorescence microscopy has become a widespread technique among scientists.

So what are these sensors with down-to-single-photon sensitivity? Most fluorescence microscopes have Electron-Multiplying Charge-Coupled-Device (EMCCD) sensors, though some systems employing Time-Delay-Integration (TDI) sensors can also be found.

EMCCDs do not differ dramatically from conventional CCD imagers; what makes the difference between the two is the structure of the output register. We know that in general the readout noise of interline-transfer CCDs can be very low — less than ~3 to ~10 $e^{-}$. In order to decrease the relative noise floor added to the signal by the output amplifier, EMCCDs use gain registers to boost the amount of electrons (signal) to be read out. Thus the same magnitude of readout noise (3-10 $e^{-}$) as in regular CCDs is superimposed on a much larger charge count — the one achieved thanks to the gain register. Let’s have an overview of the charge transfer process occurring in CCDs.

CCD charge transfer principle diagram

Phases 1 and 2 slightly overlap and are applied to two adjacent electrodes. As the electrostatic field gradually shifts from one electrode to the other, the electrons trapped in the wells move along the direction of the changing field. For more details I suggest reading more about CCDs. During the charge transfer process there is a slight chance that a few electrons escape (in high-performance sensors sometimes as few as ~1 e-).
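
In numbers (illustrative charge-transfer-efficiency figures, not from any specific device): after N transfers a charge packet retains CTE^N of its electrons, which is why CCDs need transfer efficiencies extremely close to one.

def retained_fraction(cte, n_transfers):
    return cte ** n_transfers

print(retained_fraction(0.99999, 2000))  # ~0.98: nearly lossless after 2000 shifts
print(retained_fraction(0.9999, 2000))   # ~0.82: already losing ~18 % of the charge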

This is all great, but when we try to take an image under very low light conditions, the output amplifier must practically read one, or only a few, electrons. All amplifiers add noise to the signal, so a single electron would be hidden inside the dominating electronic noise of the amplifier. A way to solve this problem “naturally” is to use a multiplying CCD register. Here is a crude sketch of a CCD register with an emphasis on the amount of charge to be read out at low light levels:

CCD readout with an emphasis of electronic noise addition from the output amplifier at low light levels

The signal is masked by the amplifier’s electronic noise. Here is a sketch of an EMCCD:

Principle diagram of an EMCCD

The phenomenon is self-explanatory, but how do we make a multiplying register? Here is a principle diagram:

Charge multiplying impact ionization register

Just as in the regular output register, the same structure is used; however, now a few extra clock phases are introduced and the electric field applied over the electrodes is much stronger. The strong electric field causes accelerated electrons to hit other electrons in the empty register cells, exciting new electron-hole pairs. The process continues as the charge is shifted through, additional electrons are generated, and thus the name “multiplying” register. The gain of a single register cell is small, but with an increasing number of stages the total multiplication grows as G = (1 + P_{cell})^{N}, where N is the number of stages and P_{cell} the impact ionization probability per cell. The exact multiplication value is hard to define, as the number of incoming electrons and the strength of the electric field cause very stochastic levels of impact ionization. Nevertheless, this amplification method is “noiseless” compared to the regular charge-to-voltage conversion and amplification in the output amplifier, and the overall SNR of the system is dramatically increased.
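
Here is the gain formula in action, with assumed percent-level per-stage probabilities:

def emccd_gain(p_cell, n_stages):
    # Total multiplication G = (1 + P_cell)^N
    return (1.0 + p_cell) ** n_stages

print(emccd_gain(0.010, 500))  # ~145x
print(emccd_gain(0.015, 500))  # ~1700x: a tiny change in P_cell swings G a lot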

Focusing back on the microscopy technique: now that we have a sensor with single/few-photon sensitivity, we can go back to the yeast cells and image them alive. Here is a resulting image:

Live yeast cell image using fluorescence microscopy

Why yeast cells? I was told that these are among the largest eukaryotic cells and have low light absorption through their membrane. Here is how I saw them otherwise:

Prepared cell samples

And the microscope used can be seen below. I could not detach the sensor for obvious reasons, but it really is a regular microscope with some electronic-eye “features”.

Fluorescence microscope

To wrap up, I am extremely happy to see a real example of two sciences with a strong bond in between. Deeply involved in our own fields, we often forget what our research is all about.