# Nonlinear Dynamics and Chaos in Gas Discharge Tubes

I’ve been procrastinating heavily lately with high voltages and some noble gas tubes lying around. Seen below is some footage I took of a xenon flash tube connected to a high-voltage inverter salvaged from the LCD of a scrap laptop.

As the current provided by the DC-DC converter is nowhere near enough for a strong (and potentially blinding) discharge, I could sit still and watch the weak discharge for a long time. Eventually, I got to play a bit with the mini-lightning and noticed that the electromagnetic fields induced into my body from the noisy environment are radiated back to the flashlamp, which causes the discharge to flicker. These flickers reminded me of Chua's circuit and the various fractal chaotic attractors. What if one used an ultra-high-speed camera to capture the chaotic trajectory and potentially fractal behaviour of the discharge? Notice how the discharge has a few very stable modes and switches between them before it goes completely random. Also note that even in chaos mode, the arc still has a higher probability of following specific paths in space than others, which is another feature of Chua's and Rössler's circuits. That behaviour is also strongly reminiscent of bifurcation diagrams like the ones below:

From order – to sub-order – to chaos, source: Wikipedia

I dug up an intriguing paper from 2015, "Complex dynamics of a dc glow discharge tube: Experimental modeling and stability diagrams". The group shows detailed statistical light-intensity diagrams from a similar discharge occurring in a neon-sign glow tube. They have also derived the bifurcation diagrams of the glow discharge and concluded that it somewhat resembles a Hénon chaotic attractor.

Although these kinds of studies have little practical applications, it is worth doing them just for the sake of the coolness of the project. Hmm, I have an old multimode HeNe gas laser tube lying around, maybe I could try hooking it up to a stronger-ish HV source, make it oscillate between modes, place a photodiode at the beam and start capturing intensities to reconstruct its fractal diagram. I am sure lots of experiments have been conducted on this, but still, it is cool and has merit for a potential geeky Christmas project.

# Plasmonic filters in nature — a follow-up

Just a quick follow-up on my previous post depicting surface plasmons. I came across a well-prepared pop-science video about butterflies under a scanning electron microscope. Notice how pouring isopropanol over the butterfly's wings changes the wavelength they reflect. Destin describes the "losing of color" as occurring due to light not being able to penetrate the nanoholes of the butterfly, which is partly true due to the reflections by the liquid medium. However, what also happens is that the isopropanol modulates the oscillation frequency of the unbound electrons of the "material" of the butterfly's nanohole wing, thereby reducing/modifying the coupling between the incident photons and interfering electrons. And doubtlessly, also all sorts of other second-order "ref-lec-to-rac-tive" effects. Notice the difference between the brown/blue hole arrays and their diameters.

An idea for a quick investigation comes to mind: what if one measures the energy of the reflected (filtered) light and compares it with the energy of the incident light for the very same filter bandwidth? How efficient are the butterflies' nanohole arrays compared to man-made ones? Most likely the answer is not that straightforward, as man-made filters are designed for an optimized transmission coefficient, while butterflies use nanohole arrays to reflect light, to attract or protect themselves from other species. It is also highly likely that there are already tons of investigations on the butterfly metamaterial topic.

One last thing that I came across some time ago: similar nanohole patterns are observed when anodizing aluminium and subsequently etching it with e.g. a fine ion etcher. Here's a preview on the topic: A visible metamaterial fabricated by self-assembly method.

# Plasmonic filters — the deus ex machina of the century

Today's semiconductor news on Spectrum presents an article about plasmonic color filters, "Flexible and Colorful Electronic Paper Promises a New Look for eBooks" by Dexter Johnson. Coincidentally, last week I had a heated discussion with a few physicists on the same topic, so I thought I'd introduce this animal here and also state my personal engineering view on it.

### Act one: introduction

In a very broad sense, surface plasmons are free-electron oscillations occurring at the interface between a metal and a dielectric, excited by a light-metal-dielectric interaction. These oscillating electrons, unbound to any atom, can be set in motion by incident photons or electrons striking the junction between the two materials. The frequency of their oscillations depends on the junction thickness itself, as well as on the distance to neighboring pairs of oscillating electrons. When light strikes a plasmonic material, apart from exciting free electrons, it also couples to them (they actually form a kind of surface electromagnetic field), creating a self-sustaining interference phenomenon. The key feature of this concept is that only photons with a specific energy can couple to the oscillating electrons, while the rest pass through; hence, the process can naturally be used as a color filter.
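As a rough numeric sketch of that resonance condition, here is the textbook lossless Drude-model estimate; the plasma frequency and dielectric constant below are assumed illustrative values (real metals need loss and interband corrections):

```python
import math

def spr_frequency(omega_p, eps_d):
    """Surface plasmon frequency at a metal/dielectric interface for a
    lossless Drude metal, eps_m(w) = 1 - (omega_p / w)**2.
    The bound surface mode sits where eps_m = -eps_d, which gives
    omega_sp = omega_p / sqrt(1 + eps_d)."""
    return omega_p / math.sqrt(1.0 + eps_d)

# Assumed illustrative numbers: plasma frequency ~1.37e16 rad/s (a value
# often quoted for gold) and eps_d ~ 2.1 (SiO2 in the visible).
omega_sp = spr_frequency(1.37e16, 2.1)
wavelength_nm = 2.0 * math.pi * 3.0e8 / omega_sp * 1e9  # free-space equivalent
```

Only photons near this frequency couple strongly to the surface mode; the nanohole geometry then shifts the effective resonance, which is what makes the filter tunable by layout.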

Such metamaterials were defined theoretically in the mid-50s, but they only became popular during the past ten years due to the rapid improvement in lithographic techniques. They could be used in applications ranging from display technology to image sensors, which is why the topic has just recently been approached by the big players in the chip fabrication field. Both of these technologies need some kind of light-filtering element, and both currently use organic color filters to achieve their goals. The problem with organic color filters is that they are complicated to produce and degrade quickly with time, especially when UV and high temperatures are involved, as is the case on the die of an image sensor.

### Act two: the complication

Plasmonic color filters are physically formed by a sandwich made of metallic bread (usually tungsten or any of the noble metals) and dielectric butter (SiO2 or equivalent). This sandwich is also accurately bitten by rats, creating a superfine nanohole array which looks like this:

Basic structure of a generic plasmonic color filter

At first glance such a technology looks very silicon-friendly, as all we need to create our filter structures is the addition of two extra metal layers to the CMOS process; sounds like a piece of cake. But why did the big semiconductor players decide to abandon this scheme as soon as they had a sniff at its surface? Plasmonic color filters are still very experimental, and I think we cannot even call them an immature technology yet. Although there are quite a few academic groups working on the problem, the prospects for mass-scale production currently seem miraculous. But hey, I am very happy when I see progress on the topic, folks! We've seen it many times: many advancements in history have been the outcome of scientific mumbo-jumbo once labeled as absurd or strange.

It has been shown that plasmonic filter structures have an excellent bandpass quality factor, dwarfing even the best organic compound color structures ever reported. The Q-factor, however, is not the only element in the picture. The transmission coefficient of the best reported plasmonics is in the order of 0.2-0.3, which is very disappointing. The filter's response is also not very steady: towards UV and deep UV, full transmission is usually observed. Nevertheless, perhaps this could be solved with an extra glass UV filter. But still, could we not use them for accurate light spectrometers, where light is abundant and low transmission coefficients are affordable?
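For a back-of-the-envelope comparison, the bandpass quality factor is just the center wavelength over the FWHM; the bandwidths below are purely hypothetical, not values from any reported device:

```python
def q_factor(center_nm, fwhm_nm):
    """Bandpass quality factor: center wavelength over bandwidth (FWHM)."""
    return center_nm / fwhm_nm

# Hypothetical bandwidths for illustration only (not reported values):
q_plasmonic = q_factor(550.0, 30.0)   # a narrow plasmonic peak, Q ~ 18
q_organic   = q_factor(550.0, 100.0)  # a broad organic dye filter, Q ~ 5.5
```

A higher Q means sharper color separation, which is exactly the property that does not help if only 20-30% of the in-band light gets through.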

### Act three: climax

Well, here is where engineering comes into play and destroys everything, for the time being… Plasmonic filters currently rely on an extremely accurate lithographic process called electron-beam lithography which, combined with dry plasma etching, has an accuracy in the order of a few nanometers. Apparently, even that is not enough to create a filter with a good yield. All reported plasmonic filters, including the one in this morning's popular-science article in Spectrum, are manufactured on a scale of a few micrometers, i.e., as the authors of the paper suggest, just a pixel of the Retina display of an iPhone. Using e-beam lithography in mass production is a fairytale. Gold is a forbidden word in semiconductor fabs, so that material falls out as a filter candidate as well. Solving optical crosstalk and alignment between adjacent filters in RGB arrays seems like another wonder to me. And then there is the fact that the microlenses deposited on top of the filters are polymer, which is another dielectric and, as the theory suggests, will modulate the surface plasmon resonance. Or the non-uniformity of the dielectric's thickness? All these issues create a highly non-linear outcome. Engineers don't like non-linearities… and neither does large-scale production.

### Act four: resolution

The outcome of this drama is still very unforeseeable, and here I let you cast the die. One thing is certain: no matter the aftermath, a generation of scientists is gaining momentum along with the new era of metamaterial science. However, for the time being, whenever somebody starts talking about how plasmonics will change the world within a year, I just smile and listen carefully.

# Radiance and Luminance – the many funny units

In electronics, we typically have very well standardized measurement units, at least when it comes to units such as volts, amperes, watts and so on… However, I cannot say the same about the optics field. I have always been confused when having to deal with photometric units, but after a recent discussion with one of the camera gurus in our field, I got even more confused, which also gave me the inspiration for this write-up. So what is the difference between radiance and luminance, and how obscure can the life of a camera designer get with such a cocktail of measurement units?

Radiance is a measure of radiometric power – it measures the rate of light energy flow. So far so good! It is (normally!) expressed in watts (joules per second) per steradian per square meter, i.e. W/m²/sr.

Luminance is a measure of the power of visible light. But what is visible light? This is where things get confusing. Within luminance, we (generally) get two different flavours, possibly inspired by the fact that the human eye has two different types of photoreceptor cells:

• Rods – extremely sensitive; they can be triggered by a single photon, so at very low light levels what we see is primarily due to the rod signal. This also explains why colors cannot be seen at low light levels: a single photon does not carry color information.
• Cones – they require significantly brighter light (more photons) in order to produce a signal, and thus provide us with color information.
• Photosensitive ganglion cells – these were discovered recently and are responsible for synchronizing our biological clock; I liken them to electronic comparators.

Okay, the last paragraph drifted a bit; back to luminance. Its two different flavours are:

• Photopic flux is expressed in lumens and is weighted to match the responsivity of the human eye, which is mostly sensitive to yellow-green. But still, whose eye is mostly sensitive to yellow-green? We are all different, and there is no single sharp standard.
• Scotopic flux is weighted to the sensitivity of the human eye in the dark-adapted state. Here as well: what is dark, anyway?

Two derivatives of radiance and luminance are irradiance and illuminance respectively. These measure the corresponding light flux per unit area at the receiver side. In other words, radiance is the energy radiated from the light source towards a unit area, while irradiance is the energy actually received, minus the light lost along its path. They are typically expressed in W/m² and lm/m² (lux). But there is more to it: there exist a dozen other measurement units, and here is where you should get some popcorn and start reading or browsing around Wikipedia:
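The bridge between the radiometric and photometric columns is the photopic weighting: illuminance is spectral irradiance weighted by the eye's sensitivity V(λ) and scaled by 683 lm/W. A coarse numeric sketch (the V(λ) samples are rounded standard CIE values):

```python
# Rounded CIE photopic luminosity samples V(lambda); peak 1.0 at 555 nm.
V = {450: 0.038, 500: 0.323, 555: 1.000, 600: 0.631, 650: 0.107}
KM = 683.0  # lm/W, luminous efficacy at the photopic peak

def illuminance_lux(spectral_irradiance):
    """Coarse illuminance from a {wavelength_nm: W/m^2} irradiance dict:
    weight each radiometric term by V(lambda) and scale by 683 lm/W."""
    return KM * sum(V[nm] * w for nm, w in spectral_irradiance.items())

# 1 W/m^2 of monochromatic 555 nm light corresponds to 683 lux:
lux = illuminance_lux({555: 1.0})
```

The same watt of red or blue light yields far fewer lux, which is exactly why "power of visible light" is such a slippery phrase.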

Let's start with the candela, as I will have to refer everything to this unit, which is part of the SI standard.

1 candela (new candela) is the luminous intensity of a source that emits monochromatic light of frequency 540×10¹² hertz with a radiant intensity of 1/683 watt per steradian. But why 1/683 watt? The number 683 very much smells like a weak British horse's power to me… Prior to 1948 the candela was not standardized, and a number of countries used different values for luminous intensity, typically based on the brightness of the flame from a "standard candle". Ha-ha-ha!

Then we start:

Nit – 1 nit = 1 cd/m²

Stilb – a unit of luminance for objects that are not self-luminous. It comes from the Greek word stilbein, which means "to flicker". 1 stilb = 10⁴ candelas per square meter.

Apostilb – 3.14 apostilb = 1 cd/m² (it is exactly π, but somebody thought they could neglect the rest of π with such an easy hand!?)

Blondel – 1 blondel = (1/π)·10⁻⁴ stilb – this unit is obviously reserved for blonde people

Lambert – 1 lambert = 1/π candela per square centimeter

Skot – 1 skot = 10⁻³/π candela/m²

Bril – 1 bril = 10⁻⁷/π candela/m²

Foot-lambert – 1 foot-lambert = 1/π candela per square foot

Foot-candle – 1 foot-candle = 1 lm/ft²
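For reference, the relations above can be collected into one small converter back to cd/m²; treat the factors as a sketch to sanity-check against a proper table:

```python
import math

# Factors converting one unit of each luminance oddity to cd/m^2,
# following the relations listed above.
TO_CD_PER_M2 = {
    "nit":          1.0,
    "stilb":        1e4,
    "apostilb":     1.0 / math.pi,
    "blondel":      1.0 / math.pi,               # same size as the apostilb
    "lambert":      1e4 / math.pi,               # (1/pi) cd/cm^2 -> cd/m^2
    "skot":         1e-3 / math.pi,
    "bril":         1e-7 / math.pi,
    "foot_lambert": (1.0 / math.pi) / 0.3048**2,  # cd/ft^2 -> cd/m^2
}

def to_nits(value, unit):
    """Convert a luminance value in one of the units above to cd/m^2 (nits)."""
    return value * TO_CD_PER_M2[unit]
```

For example, `to_nits(1, "foot_lambert")` comes out near 3.43 cd/m², the factor display folks still quote today.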

The image sensors field is baffled by all these units, some of which are still in use today. A possible explanation for why we have so many photometric units, compared to a single sharp one for electric current, is that we humans naturally have light detectors, but not electric current detectors (well, sort of). Everyone owning such receptors creates the perfect environment for speculation. This, combined with vigorous Victorian-age pride, is possibly the cause of the creation of all those weird units. What do you think?

Lastly, here, if somebody asks you, this is one foot-candle!

One foot-candle, courtesy of General Electric

# ATLAS silicon strip detectors and charge sensing

Some time ago, scientists at the Large Hadron Collider (LHC) at CERN reported the potential discovery of a new fundamental particle which does not fit anywhere in the Standard Model of physics. According to "the news", the latest data from ATLAS and CMS (LHC's two largest detectors) show two unexpected "bumps" over the usual gamma-ray flashes, which are correlated and were acquired by two separate detectors. According to physicists, this may point to the existence of a particle whose significance dwarfs by light-years even the recent discovery of gravitational waves.

It is not yet certain whether this measurement data will be confirmed or rejected, but the latest news indicates that the significance of the results is fairly low, at approximately 1.6 sigma. That fact inspired me to write a bit about the basics of the basics of silicon strip detector charge sensing, which by now is stone-age technology next to commercial light-sensing CMOS image sensors.

So what are strip detectors and how are they used? These are basically PN-junctions with an extremely wide aspect-ratio and, as their name suggests, look like strips. Here’s a sketch:

A bird’s eye view of silicon strip detectors

These strips usually share an N-type substrate while each is P+ doped, covered by aluminium with some extra insulation layers in between. The LHC scientists are interested in observing interference patterns in X- and gamma rays caused by the decay of the sought-after particles. Apart from their intensity, what also interests them is the spatial trajectory of the high-energy rays. In order to detect the 2D position of the gamma rays, they have invented a very clever strip array configuration. Let me explain; here's another sketch:

Particle incidence angle detection using parallel strip configuration

An incident particle has the highest probability of generating electron-hole pairs in the strip crossed by the X-ray photon, which already provides a kind of one-dimensional readout. To obtain the angular information, the adjacent strips can also be read out and a particle correlation can be reconstructed. In other words, if the gamma ray happens to arrive at an angle of e.g. 45 degrees, it will generate electron-hole pairs in two or three adjacent silicon strips. This already gives us almost-2D particle trajectory information. However, CERN engineers have expanded the technique even further by adding another, crossed pair of detectors underneath the upper set:

Hybrid X- and Y- direction parallel strip sensor configuration

That way they can extract position and angular information not only in the X direction but also in the Y direction, which, with some post-processing, provides accurate particle intensities and trajectories. But how can these PN silicon strips be read out?
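The crossed-strip idea fits in a few lines of code; the strip pitch and indices below are hypothetical toy numbers, not ATLAS parameters:

```python
def reconstruct_hits(x_strips, y_strips, pitch_mm=0.08):
    """Combine fired strip indices from the X and Y layers into 2D hit
    candidates (in mm). Every X/Y coincidence is kept, so ambiguities
    from several simultaneous particles remain for post-processing."""
    return [(ix * pitch_mm, iy * pitch_mm)
            for ix in x_strips
            for iy in y_strips]

# A single particle firing strip 120 in X and strip 340 in Y gives one
# candidate at roughly (9.6 mm, 27.2 mm):
hits = reconstruct_hits([120], [340])
```

With several particles in the same event, the Cartesian product produces "ghost" combinations, which is one reason the real detectors lean so heavily on statistics and correlation between layers.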

The simplest method for reading out thousands of strips is to use an integrated charge amplifier and digitization electronics per channel. Charge-sensitive amplifiers have not been very "widely" used with passive-pixel CMOS image sensors in the past, but they have proven very suitable for single-detector readout. They are still used in single-line CMOS line-scan sensors thanks to their low-noise capabilities at low detector capacitance.

The commonly used scheme is an operational-amplifier-based integrator with an integrating capacitor in the feedback, sketched below:

A basic charge amplifier topology for strip sensor readout

These amplifiers have a high input impedance; they integrate weak charge pulses, convert them into voltage pulses for amplification, and then buffer the output for readout by the next block in the chain. Because of that operation, this type of amplifier is called a "charge amplifier". The first stage is usually a low-noise differential pair, and its open-loop gain is set sufficiently high so that the amplification is not influenced by the detector capacitance, which reduces the loop gain through the feedback. The output stage is a low-impedance buffer so it can drive the next circuits in the chain, typically the S/H stage of an ADC.

When particle decay rays strike the silicon strips, signal charge pulses Qs are generated, with an amplitude proportional to the particle energy. Due to this charge generation, the input potential of the charge amplifier lifts up and, at the same time, a potential of reverse polarity appears at the output, due to the negative feedback. Because the amplifier's open-loop gain is sufficiently large, the output works through the feedback loop to pull the input terminal's potential back to zero, after some settling time dependent on the unity-gain bandwidth of the opamp itself. As a result, the signal charge Qs is integrated onto the feedback capacitance Cf, and the output voltage changes according to the integrated charge. Since the feedback resistor Rf is connected in parallel with Cf to set the DC operating point, the output voltage then slowly discharges with the time constant τ = Rf·Cf. The output voltage of this scheme scales inversely with the size of the feedback capacitor, so Qs and Cf must be chosen wisely to fulfill the desired dynamic range. Noise performance and dynamic range are therefore in direct trade-off: a larger Cf extends the dynamic range but lowers the voltage swing per unit charge and hence worsens the noise, and the reverse also applies.
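A minimal model of the behaviour described above, assuming an ideal opamp; the component values are illustrative only:

```python
import math

def charge_amp_output(qs, cf, rf, t):
    """Ideal charge-amplifier response: a step of -Qs/Cf at t = 0 that
    then decays through the feedback resistor with tau = Rf * Cf."""
    return -(qs / cf) * math.exp(-t / (rf * cf))

# Illustrative numbers: 10 fC of signal charge, Cf = 100 fF, Rf = 10 MOhm.
tau = 10e6 * 100e-15                                  # 1 us decay constant
v_peak = charge_amp_output(10e-15, 100e-15, 10e6, 0)  # -0.1 V step
```

The -Qs/Cf peak is what the S/H stage must catch before the Rf·Cf decay eats it, which is why the settling and sampling timing matters as much as the gain.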

Note that the ATLAS detector has a total of over 200 m² (square meters!!!) of pure detector strips! With a strip size of 0.01 mm by 40 cm, that works out to a staggering number of roughly 50 million strips and readout channels respectively. With such a huge set of sensors, both ATLAS and CMS rely on the statistical significance of their measurements, and the weird correlation in the slight gamma peaks might truly be caused by a completely new fundamental particle. However, the readout complexity of such an enormous set of sensors is colossal, which makes the introduction of errors a plausible explanation as well.

Fingers crossed that all the sensing electronics works flawlessly and that all abnormal peaks detected are due to a newly detected particle.

# Chiseling out The Chip!

This post may be a bit redundant with the info I added in the other place, but I am excited, so I felt the need to rewrite some of it here.

Le Chip! This work took a while. To celebrate, I thought it deserved a few words in the blogs. During the past year or so, I was/have-been/will-continue-to-be working on an image sensor ADC testchip. It was finally taped out yesterday! What's left now is some additional gastronomical work on the tapeout cake and the draining of a rusty bottle of champagne.

The chip in all its ugly majesty with all these redundant power pads and LVDS pairs.

The core of the testchip is a fast 12-bit column-parallel ramp ADC at 5u pitch, using some special counting schemes to achieve the desired 1us ramp time at slow clock rates. Alongside it, to be able to fully verify the pipelined CDS functionality and crosstalk, I've built a pixel array in line-scan configuration, some fast LVDS drivers, clock receivers, references, state machines, a few 8-bit iDACs, bond pads, ESD, and some other array-related stuff, all from scratch! The chip has a horizontal resolution of 1024 pixels and 128 lines with RGBW color filters and microlenses.

On the top-left corner there are some experimental silicon photomultipliers and SPAD diodes. These I plan to measure for fun, and I promise to post the results in one of the two blogs.

Unfortunately, this chip won't yield tons of publication work, apart from the core ADC architecture and comparator. To test the ADC one needs a whole bunch of other fast readout blocks, which in the end are nothing novel, and yet one needs them, and designing them takes time. Finishing up this test system was a lot of work, and I realize that it might be a bit risky and ambitious to be doing this as part of a doctorate. What if it fails to work because a state machine has an inverted signal somewhere? Or the home-made ESD and pads suffer from latch-up? Or the LVDS driver CMFB is unstable and I cannot read data out? Or there is a current spike erasing the content of the SRAM? Or, or, or?

We university people don't have the corporate power to tape out metal fixes twice a month until we get there. I probably have another two or three chip runs for my whole doctorate. It may therefore be better (and more fun) to stick with small but esoteric modules, which one can verify separately and have time to analyze in detail. But hey, I'll quote a colleague here: "It is what it is, let's think how we can improve things."

Finally, I have added this little fella who I hope will be my lucky charm.

Mr Le Duck!

With his 15um of height, could he compete in the annual “smallest duck on the planet” contest? Cheers!

# Random “stuff” about silicon photomultipliers and avalanche diodes

This writing may be considered a very random continuation of an older post briefly mentioning the use of photomultiplier tubes for gamma spectroscopy, but the truth is that it is so random that it could be impossible to follow. I have some spare 300x600um of silicon space and have recently been thinking about how to fully utilize the area on the chip I am about to tapeout. Apart from having a free corner of silicon, unfortunately, time is neither infinite nor free (what?), and I have just about a week to think, analyze and engineer whatever it will be. This is why I am also posting this, hoping that in the process of writing/re-reading I would suddenly get that brilliant idea which would banish all first-world problems and get me through academia obscura.

Back to the point. Some five years ago, the image sensors field suddenly realized that instead of trying to integrate the current/electrons from photo-generated electron-hole pairs as charge stored on a capacitor, one could initiate an ionization process triggered by a single photon in a PN junction with a very high, concentrated electric field, a process often referred to as impact ionization. The latter is essentially the same as the photomultiplying effect in vacuum tubes, with the difference that the medium where the process occurs is silicon, plus some additional second-order effects. The old photon-hit-electron-integrate-readout approach is a technology which has been here for decades, and still impresses us with its mature immatureness. How is impact ionization a better solution than the conventional one? The answer is: it's not (for now), and it depends.

Here is my simple explanation of why impact-ionization detectors, also called Single Photon Avalanche Diodes (SPADs), should theoretically perform better under low-light conditions than the conventional technologies. In electronics, we often use the Friis formula, which states that to minimize the noise figure of a signal chain, we should apply gain in the system as early as possible and/or perform the less noisy operations first:

$F_{total} = F_1 + \frac{F_2-1}{G_1} + \frac{F_3-1}{G_1 G_2} + \frac{F_4-1}{G_1 G_2 G_3} + ... + \frac{F_n - 1}{G_1 G_2 ... G_{n-1}}$

It is very intuitive, and it can be applied to the signal chain of an image sensor too, even though that chain begins with a noisy source of photons (photon shot noise), distorted by microlenses, attenuated by color filters, converted to electrons with noise, then electrons to voltage (?) and voltage to a digital number. Most ultra-low-noise CMOS image sensors use the so-called High Conversion Gain (HCG) pixels. In simple language, this means that their integration capacitor (FD, the floating diffusion) is minimized as much as possible compared to the photodiode junction capacitance. This results in a larger voltage swing on the integration capacitor (FD) per detected photon, which is basically equivalent to maximizing the gain at the very beginning of the photon-to-electron conversion process. Remember the Friis formula?
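To make the cascade argument concrete, here is a tiny numeric sketch of the formula above; the stage noise factors and gains are made-up illustrative values:

```python
def friis_total(stages):
    """Total noise factor of a cascade per the Friis formula.
    stages: ordered list of (F, G) pairs, linear noise factor and gain."""
    f_total, gain = stages[0]
    for f, g in stages[1:]:
        f_total += (f - 1.0) / gain  # later stages are divided by gain so far
        gain *= g
    return f_total

# Same two blocks in both orders: putting the low-noise, high-gain stage
# first keeps the cascade noise factor near its own (1.22 vs 3.02).
low_noise_first = friis_total([(1.2, 100.0), (3.0, 10.0)])
noisy_first     = friis_total([(3.0, 10.0), (1.2, 100.0)])
```

The HCG pixel is exactly the "low-noise, high-gain stage first" case: maximize conversion gain at the floating diffusion, and everything downstream matters much less.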

Why does a SPAD look promising for ultra-low-light imaging? Avalanche photodiodes, based on impact ionization, have an enormous gain; thus a single photon can pull the trigger and cause the diode to hit the rail. That makes life easier for the rest of the measurement chain too: instead of a complex ADC, we can just use a counter. Sounds brilliant; however, there are some difficulties which prevent us from reaching perfect photon counting. Here's a small list I am thinking about right now:

1. The gain in a SPAD may be considered infinite, so according to Friis the output should be noise-free. However, SPADs, for now, are triggered not only by photons, but also by random thermal excitation, sudden charge releases from defect traps, and gamma-ray impacts with electrons. The main quality parameter of a SPAD is its so-called Dark Count Rate (DCR), i.e. false triggers per unit time under dark conditions. This is a very primitive measure; however, until now there has been no good method for quantifying what part of the DCR is caused by each of the aforementioned side effects.
2. After initiating an avalanche breakdown, in order to arm the diode for another measurement round, we need to cut its power supply and then gradually apply the high reverse bias voltage again. The time needed for this operation is called the reset time. This reset (dead) time is the major obstacle to achieving full single-photon detection at high light intensities.
3. SPADs work under high reverse bias voltage, which makes them hard to integrate with readout electronics on 1v2/3v3 CMOS processes while still keeping the readout's contribution to the reset time low.
4. SPAD structures can easily be implemented in standard CMOS, and this has been done by a number of research teams during the past 5 years.
5. Most of the researchers are working on SPADs for Time of Flight (ToF) imaging, or ultra-low-light sensing.
6. Most of the research is done on multi-channel readout, which is actually the way to go, but is very challenging in a 2D process (no 3D stacking).
7. Can we use multiple arrays, but have a single readout?
8. Do we now have access to hundreds of PM tubes on a single chip? What could we use these for?
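Point 2's reset (dead) time can be put into numbers with the classic non-paralyzable dead-time model; the 50 ns figure below is just an assumed example:

```python
def measured_rate(true_rate_hz, dead_time_s):
    """Non-paralyzable dead-time model: while the SPAD is being reset it
    is blind, so the observed count rate saturates at 1 / dead_time."""
    return true_rate_hz / (1.0 + true_rate_hz * dead_time_s)

# With an assumed 50 ns reset (dead) time, the counter saturates near
# 20 Mcps no matter how bright the scene gets:
r = measured_rate(1e9, 50e-9)  # ~19.6 Mcps observed for 1 G photons/s
```

This saturation is exactly why the reset time, not the gain, sets the upper end of a SPAD's usable dynamic range.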

Access to hundreds of "PM tubes" on a single chip? We call that a silicon photomultiplier (SiPM). These have existed for a long time and are offered as discrete components; however, the information from each PM is hard to extract: all PMs share the same bus, and we get an output current that is very difficult to measure. Difficult in the sense that it is noisy and hard to distinguish. Conventional SiPM readouts integrate and low-pass filter the output current before performing measurement/digitization. To grasp what I am referring to, here's a simple electrical equivalent diagram of a SiPM:

Silicon Photomultiplier equivalent diagram

What you essentially see is a number of SPAD diodes with passive quenching (the resistor in series), which acts as an automatic reset. When the diode fires, the resulting high in-rush reverse bias current causes a voltage drop across R (and on the SPAD's cathode respectively), which acts as a feedback mechanism and prevents the avalanche breakdown from continuing, thus resetting the diode by gradually restoring the cathode voltage. Typically, SiPMs on the market have a passive quench (resistor in series) with the SPADs; the latter are connected in parallel and can reach a relatively large number, in the order of hundreds to thousands. The count depends on their dark and firing currents as well as the desired photosensitivity. To help you get an idea of how a SiPM looks physically, here's an example of a small 8×8 SPAD array in SiPM configuration I just sketched in Virtuoso:

An 8×8 diode Silicon photomultiplier array layout diagram

The circular shape of the junctions comes from the fact that we want a strong, uniform electric field around the junction, which should make the diode more susceptible to avalanche breakdown. Ideally it should be entirely circular; in the case above I've used a hexagonal shape, as this CMOS process does not allow angles other than 45/90 degrees. Using hexagonal diodes creates electric-field stress points, which increases the dark count rate. One possible structure for SPAD formation in standard triple-well CMOS is a junction between an NW/DNW and a local P-Well created over the Deep N-Well. The latter local P-Well has the shape of a doughnut and acts as an electric field concentrator. Here's a cross-section sketch: