# Random processes in time and frequency – an attempt at an infographic

Some time ago, I got fascinated by a few infographics from absorptions. The dial-up modem poster deserves special attention!

I have been looking at some topics on noise recently and decided to attempt assembling my first intentional infographic. Here it is:

Random Processes in Time and Frequency

The background is a 24-hour full-spectrum scan (waterfall chart, 6 MHz – 2.2 GHz), which I captured with my SDR dongle last summer up in the Balkan mountains. Here you can find another version.

It ended up being a bit messy and probably not very useful, neither for novices nor for professionals in the field. Nevertheless, I have accumulated ideas for another infographic which should be very entertaining. Stay tuned for a follow-up post.

# Chiseling out The Chip!

This post may be a bit redundant with the info I added in the other place, but I am excited, so I felt the need to rewrite some of it here.

Le Chip! This work took a while. To celebrate, I thought it deserved a few words in the blogs. For the past year or so, I was/have-been/will-continue-to-be working on an image sensor ADC testchip. It was finally taped out yesterday! What’s left now is some additional gastronomical work on the tapeout cake and the draining of a rusty bottle of champagne.

The chip in all its ugly majesty with all these redundant power pads and LVDS pairs.

The core of the testchip is a fast 12-bit column-parallel ramp ADC at a 5 µm pitch, utilizing some special counting schemes to achieve the desired 1 µs ramp time at slow clock rates. Alongside it, to be able to fully verify the pipelined CDS functionality and crosstalk, I’ve built a pixel array in line-scan configuration, some fast LVDS drivers, clock receivers, references, state machines, a few 8-bit iDACs, bond pads, ESD, and some other array-related stuff, all from scratch! The chip has a horizontal resolution of 1024 pixels and 128 lines with RGBW color filters and microlenses.

On the top-left corner there are some experimental silicon photomultipliers and SPAD diodes. These I plan to measure for fun and I promise to post the results in any of the two blogs.

Unfortunately, this chip wouldn’t yield tons of publication work, apart from the core ADC architecture and comparator. To test the ADC one needs a whole bunch of other fast readout blocks, which in the end are not something novel, and yet one needs them, and designing them takes time. Finishing up this test system was a lot of work, and I realize that it might be a bit risky and ambitious to be doing this as part of a doctorate. What if it fails to work because a state machine had an inverted signal somewhere? Or the home-made ESD and pads suffer from latch-up? Or the LVDS driver CMFB is unstable and I cannot read data out? Or there is a current spike erasing the content of the SRAM? Or, or, or?

We university people don’t have the corporate power to tape out metal fixes twice a month until we get there. I probably have another two or three chip runs for my whole doctorate. It may therefore be better (and more fun) to stick with small but esoteric modules, which one can verify separately and have time to analyze in detail. But hey, I’ll quote a colleague here: “It is what it is, let’s think how we can improve things.”

Finally, I have added this little fella, who I hope will be my lucky charm.

Mr Le Duck!

At his 15 µm of height, could he compete in the annual “smallest duck on the planet” contest? Cheers!

# Applied chaos theory

Today I am having fun with a very nerdy circuit – Chua’s circuit, an electronic circuit that exhibits chaotic behaviour. I had a hard time getting it to oscillate, but finally, after some prayers, bean throwing and black magic, here it is:

Chua’s circuit

Heheh, it looks terrible, but it works. Here are some cool pictures of the resulting double-scroll attractor (Chua’s circuit’s answer to the famous Lorenz attractor). I was not able to spread the two scrolls further apart, as my LC tank (in fact a gyrator-capacitor tank) would stop oscillating.

Attractor curves, trial 1

Attractor curves, trial 2

Attractor curves, trial 3

Attractor curves, trial 4
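For the curious, the chaotic dynamics are easy to reproduce numerically. Here is a small sketch integrating the dimensionless Chua equations with textbook parameter values (my own choice for illustration, not values measured from my breadboard):

```python
def chua_diode(x, m0=-1.143, m1=-0.714):
    """Piecewise-linear negative-resistance characteristic of Chua's diode."""
    return m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))

def step(x, y, z, dt=0.001, alpha=15.6, beta=28.0):
    """One forward-Euler step of the dimensionless Chua equations
    (crude but adequate for a qualitative picture at this step size)."""
    dx = alpha * (y - x - chua_diode(x))
    dy = x - y + z
    dz = -beta * y
    return x + dt * dx, y + dt * dy, z + dt * dz

x, y, z = 0.7, 0.0, 0.0
trajectory = []
for _ in range(200_000):
    x, y, z = step(x, y, z)
    trajectory.append(x)

# x wanders chaotically between the two scrolls, never settling and never escaping
print("x range:", min(trajectory), max(trajectory))
```

Plotting x against y (or feeding the two node voltages into an oscilloscope's X-Y mode, as in the photos) reveals the two scrolls.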

And in case you also want to hear what it sounds like and see what my setup was, here’s a short video clip:

# Visible light communications

Not long ago I listened to a talk by Dr Sujan Rajbhandari, who gave a brief overview of visible light communications and some in-depth details about one of the latest projects at the optical wireless communications group here.

Let me give you a brief outline of their system and some of the main challenges in visible light communication (abbreviated VLC from now on). Let’s start by stating some of the challenges:

1. VLC cannot be established through solid media, i.e. visible light is blocked by walls, obstacles, etc.

2. Point 1 may be considered something positive, as it allows us to implement a very secure communication link; light does not escape the room, as opposed to an RF Wi-Fi signal, for example. On the other hand, it also prevents us from achieving high coverage.

3. Visible light sources (bar some lasers) have quite a low bandwidth compared to what is achievable with an RF modulator. For instance, Dr Rajbhandari noted that the highest-bandwidth LED they used, which was by the way custom made, had a bandwidth of ~50 MHz (off the top of my head, maybe more?). 50 MHz gives us a maximum theoretical link rate of 25 MSPS at one pulse-amplitude level (on-off keying). Adding the receiver (photodiode) bandwidth limitations, the communication medium, noise, etc., drops the link bandwidth dramatically.

4. External light happens to be a severe noise contributor to the link. E.g. we want to be in a brightly lit room and still use a VLC link; however, the daylight shines on the receiver (photodiode) too, so we get huge interference and, practically, a failure to establish a link.

5. What if my lamp (transmitter) does not illuminate my receiver directly, but via a reflection? More SNR losses and an increase in the bit error rate (BER).

From the few points above it becomes obvious that building a good VLC system is not an easy task. Even so, the group here has reported connection speeds of 3 Mbps at a distance of one meter, which currently appears to be a world speed record for VLC systems.
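To make the noise points above a bit more concrete, here is a toy on-off-keying link simulation (entirely my own sketch, not the group's actual system): bits drive the light on or off, receiver noise is added, and a simple threshold detector recovers them:

```python
import random

random.seed(1)

def ook_link(bits, noise_rms, threshold=0.5):
    """Transmit bits as light on/off, add Gaussian receiver noise,
    and recover them with a simple threshold detector."""
    received = [b + random.gauss(0.0, noise_rms) for b in bits]
    return [1 if r > threshold else 0 for r in received]

bits = [random.randint(0, 1) for _ in range(10_000)]

# More receiver noise (ambient light, photodiode noise) -> more bit errors
for noise_rms in (0.1, 0.3, 0.5):
    decoded = ook_link(bits, noise_rms)
    errors = sum(b != d for b, d in zip(bits, decoded))
    print(f"noise rms {noise_rms}: BER = {errors / len(bits):.4f}")
```

Swapping the two levels for 4 or 8 pulse-amplitude levels packs more bits per symbol, but the decision thresholds move closer together and the same noise causes far more errors, which is exactly the trade-off described below.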

I would like to share some pictures of their excellent work (huge thanks to the group for letting me have a look at their device and for organizing the interesting weekly seminars in optical communications).

The visible light communication link setup.

VLC Transmitter – Left; VLC Receiver – right; link distance – 1m

A close view of the transmitter and its collimating lens

TX driver with pulse-amplitude modulation controller

The RX side, lens and active photodiode sensor RX matrix.

Link demo, sinewave-driven LEDs and thresholded receiver output.

The group tried various channel transmission schemes, such as Multiple-Input Multiple-Output (MIMO) and Single-Input Single-Output (SISO). The beauty here is that the collimating lenses allow a MIMO link setup, effectively increasing the bandwidth of the system.

Such smart schemes will hopefully mature enough to be implemented in the real world. They even tried using 2, 4 or 8 pulse-amplitude-modulation levels to achieve an even higher throughput. Have a look at their publications if you are curious about their system.

# Signals, transistors and music

Hej hopp! Not long ago (well, technically last year) I was helping out a fellow in the lab next door with his RF LNA simulations. All of a sudden our afternoon turned into a discussion about various guitar distortion effect pedals and their circuit implementations. Today I want to show you one of the simplest possible active circuits for distorting a signal.

So what do we call distortion? Distortion is the deviation of an output signal’s waveform from some reference signal. Put that way, one might argue that an amplified signal is also a distortion of sorts; however, when we speak about distortion we normally refer to the bending of the wave shape in the time domain, and not simply an amplitude scale-up.

In mathematics and engineering the sine wave is the simplest possible waveform, in the sense that one can generate any other waveform by adding up a (possibly infinite) number of sine waves. In practice we cannot add an infinite number of sine waves together, and thus we are limited in what we can generate. Out of the scope of this topic, have a look at the Gibbs phenomenon if you want to escape the ideal world of mathematicians and are interested in practical waveform generation, albeit with some maths involved again. In music there are normally no instruments that produce pure single-sine-wave tones (bar some tuning forks at e.g. 440 and 880 Hz), but for ease of explanation the rest of this post will use pure sine waves as a reference.

Back to circuits, here is the simplest possible electronic distorter circuit I can currently think of:

A half-wave rectifier.

This circuit removes part of the fundamental (only one half of each wave passes through), and in doing so it introduces noticeable harmonics of the fundamental frequency, which might often be very desirable.
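A quick numerical check of that claim (my own sketch): half-wave rectifying a unit sine leaves the fundamental at half amplitude and adds a DC term plus even harmonics (the 2nd at 2/3π ≈ 0.212), while the higher odd harmonics vanish:

```python
import math

N = 4096  # samples over exactly one period
rectified = [max(0.0, math.sin(2 * math.pi * n / N)) for n in range(N)]

def harmonic_amplitude(signal, k):
    """Amplitude of the k-th harmonic via direct Fourier projection."""
    n = len(signal)
    a = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
    b = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
    return 2.0 * math.hypot(a, b) / n

for k in range(1, 5):
    print(f"harmonic {k}: amplitude {harmonic_amplitude(rectified, k):.4f}")
```

This matches the textbook Fourier series of a half-wave rectified sine, so a single diode already turns one pure tone into a small harmonic family.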

Moving to an active-element solution, a side-effect of most electronic amplifiers can be utilized to achieve a desirable distortion. The phenomenon is often called clipping. Here is an illustration:

Sine wave clipping

To give a better overview without wasting much time, I decided to hook up a Simulink model and fetch some plots. However, after some frustration with this amateur tool, I went back to my dear good old friend Virtuoso and hooked up a common-source amplifier in a 180 nm process. Here is the common-source amplifier testbench I used for my examples.

An NMOS common source amplifier.

An input-output transfer function, input bias voltage DC sweep.

First I did a DC sweep to find a suitable operating point. Have a look at the amplifier’s transfer function (right): even though we often assume it is linear in a certain region, we can see that it actually isn’t (green curve), at least not in this 180 nm CMOS process. I have put a fairly high-impedance load of 1 MOhm to increase the gain of the circuit, which is approximately $A_{0} = g_{m} \times R_{L}$. So even with the amplifier properly biased at ~550 mV (mid output range) and driven by a 1 kHz sine wave with a reasonable swing, so that we don’t overdrive it, one can still observe some minimal distortion. On the left, the ideal input sine wave (red) and the amplified output sine wave (green) can be seen. To the right is an FFT spectrum of the amplified (green) sine wave. You can also see some FFT spectral leakage, as I was too lazy to set up a Fourier transform with coherent sampling.

A fairly low-distortion output sine wave.

However, what happens when we overdrive the amplifier? As the output swing is limited by the power supply, the output starts to saturate and we begin to clip the sine wave. The higher the input swing, the heavier the clipping and distortion at the output. One can observe on the FFT that the power of the 1 kHz fundamental tone is now redistributed across a number of harmonics.

A higher overdrive.

Heavy distortion; note that the second harmonic is at about 1/3 of the fundamental tone’s power:

Very high overdrive, a heavy distortion
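The clipping effect is easy to reproduce numerically, too. Here is a sketch of the phenomenon (not of the actual Virtuoso testbench): hard-clipping a sine symmetrically pours its energy into the odd harmonics, and the harder the clip, the stronger they get:

```python
import math

N = 4096
CLIP = 0.3  # clip level; the sine has amplitude 1, so this is heavy overdrive

clipped = [max(-CLIP, min(CLIP, math.sin(2 * math.pi * n / N))) for n in range(N)]

def harmonic_amplitude(signal, k):
    """Amplitude of the k-th harmonic via direct Fourier projection."""
    n = len(signal)
    a = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
    b = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
    return 2.0 * math.hypot(a, b) / n

for k in range(1, 6):
    print(f"harmonic {k}: amplitude {harmonic_amplitude(clipped, k):.4f}")
```

A real amplifier like the one above clips asymmetrically (it saturates differently toward the rails), which is what brings in the even harmonics visible on the FFT plots.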

Seeing all these pictures, we can conclude that the various combinations of harmonics give us all these different (some pleasurable, others not so) sound effects. Various amplifier/distorter circuits sound different, and it is up to the musician’s/designer’s taste. Back in the early days of modern music (e.g. Pink Floyd et al.), musicians were to some extent circuit designers and experimenters, tuning up the perfect circuit implementation to suit their needs. In the past 20 years, direct digital synthesis and signal processing have offered a number of benefits to musicians; however, the discretely tunable nature of these devices brings a number of limitations when it comes to distortion effects. I was aiming to find a way to play my sine waves (which is why I initially approached Simulink, as it has an audio sink function), but it turned out to be a somewhat lengthier operation, so I am leaving it to your imagination.

It is fascinating to see that such a simple electronic phenomenon has dramatically added to the variety of music available today. I offer you an example of a very slight distortion in combination with a German flute and some digital delay and harmonization. Only 50 years ago one couldn’t even imagine such sounds. Hmm, what will music sound like by the time I reach retirement age?

# Psychoacoustics and the auditory masking phenomenon

Happy New Year dear fellow geeks!

Many New Year celebrations around the world involve, at some point, listening to the national anthem or some other background music. During New Year’s Eve I had an intriguing discussion with friends about music and data compression, and I decided to transfer part of it into this post.

We are constantly surrounded by sounds, and it is our brain’s task to distinguish between various tones, take what is important for us and filter out what it decides is useless. The first part of this post aims to give a brief overview of the psychoacoustic phenomenon named “auditory masking”, while the second elaborates on the perceptual coding schemes used for lossy music data compression. In the next few lines you will not see any formulas or complicated modelling, since I want to focus on the principle and draw some conclusions.

We all know that the human ear can normally hear sounds with frequencies from 20 Hz up to around 20 kHz, and the latter limit varies with age. To illustrate a bit, here is what a “Bode” plot of the human ear looks like:

An equal loudness human ear sensitivity plot.

The perceived frequency resolution varies within the audible range; its finest region appears to be within 1-4 kHz, which also matches the pitch range of most human speech. This property, together with the masking phenomenon, helps us understand what the human ear cannot perceive, and what can therefore be excluded from the music information we store.

So what is auditory masking? Imagine being at a bus stop, talking to a friend, when suddenly a noisy bus arrives. You are no longer able to hear your friend, but you can clearly see his lips moving, and he is in fact producing audible speech. His speech has been masked by the noise of the bus engine. Here is an example, narrowed down to a few tones:

Auditory masking phenomenon example.

Beyond a certain amplitude (marked as the mask threshold) we can no longer perceive the weaker tone of interest. This is known as the auditory masking phenomenon:

Auditory masking phenomenon, a more general case.

It is observed that the mask threshold varies depending on the frequencies of the masker tones and the tone of interest: the further away the masker tones are from the main tone, the higher the mask threshold is.

Knowing this phenomenon associated with the human ear, back in the 1990s the Fraunhofer Institute for Integrated Circuits in Erlangen came up with a clever idea known as perceptual coding. Perceptual coding is a lossy audio encoding scheme that reduces the number of quantization steps for the inaudible sounds contained in the music/sound data. The nowadays-popular MP3 encoding scheme utilizes perceptual coding and the masking phenomenon for lossy audio compression.

Because I like drawing, here is a simple block diagram of the perceptual coding scheme:

A principle diagram of the perceptual coding scheme.

The core of the encoder is the perceptual model of the human ear. Based on it, a bank of bandpass filters is tuned (depending on the ear’s critical bandwidth at various frequencies). We need these to be able to reduce the detail of specific bands in a later step. The chosen frequency tone spacing (filters) determines the masking threshold.
The perceptual model steers a variable quantization and sampling-rate control block. In practice, the quantization steps of an inaudible tone falling into the corresponding filter bank (band-pass filter) are greatly reduced. Digital arithmetic number rounding or truncation is normally used for information reduction, though variable sampling-rate control can sometimes be utilized as well. After information reduction, some additional lossless encoding (e.g. look at Huffman coding) is performed, before the final bit-stream packing.
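As a toy illustration of the idea (entirely my own sketch, with made-up band names and numbers, not the actual MP3 algorithm): give each band a quantization step proportional to its masking threshold, so well-masked bands are stored with far fewer distinct values:

```python
# Toy perceptual quantizer: per-band samples plus a masking threshold
# (in the same linear units); a higher threshold allows a coarser step.
bands = {
    "1-2 kHz (clearly audible)": ([0.81, -0.40, 0.22], 0.01),
    "8-9 kHz (heavily masked)":  ([0.05, -0.03, 0.04], 0.20),
}

def quantize(samples, mask_threshold):
    """Round each sample to a step sized by the masking threshold:
    anything finer than the threshold is inaudible detail."""
    step = max(mask_threshold, 1e-6)
    return [round(s / step) * step for s in samples]

for name, (samples, thr) in bands.items():
    print(name, "->", quantize(samples, thr))
```

The masked band collapses to all zeros, which the subsequent lossless (e.g. Huffman) stage then encodes in almost no bits, while the audible band keeps its full detail.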

Ideally, if the perceptual model of the human ear had infinite precision, such coding schemes would appear perfect/lossless. Unfortunately this is not the case in reality, but hey, MP3 has nevertheless changed the world. Isn’t this an incredibly elegant invention of the 20th century? Here is my example of an application of this clever lossy compression scheme 🙂

# Dithering and is no noise a good noise? – Part one.

The technique which I will try to briefly describe in this post is often referred to as dithering. The term, according to Wikipedia, comes from the old English word “didder”, meaning to shiver or shake.

During continuous-time to discrete-time signal conversion, a so-called quantizer component is used. Quantization in signal processing is the process of converting a fine, continuous-amplitude (analog) signal into one represented by a finite set of amplitude steps. It is important to mention that time sampling (ideally, without quantization) does not generate noise: if one ensures that the sampling frequency exceeds the Nyquist rate, the smooth signal can be recovered and no information is lost. In reality, however, in a sampled system we cannot save each sample with its exact value; instead we need to quantize (digitize) it and store it with a limited number representation (bits). Having a limited set of numbers means there will be a discrepancy between the real analog signal and the quantized (digitized) one. Here is a quick sketch showing what I mean:

An analog signal and its quantization

The analog signal is quantized into a finite set of steps, and the error between the real sampled values and the converted (quantized) values is shown in the hatched area. Now, it has been shown that with the help of oversampling techniques one can statistically increase the resolution of a quantizer by, generally speaking, accumulation and averaging.

My post aims to show how the effect of oversampling, and therefore the signal-to-noise ratio (SNR), can be improved by injecting even more noise into the system. At first glance one might say – no way! However, that’s not always the case. If the quantizer is coarse (so that circuit thermal noise does not influence it much) and the injected noise (dither) has the right magnitude, we can gain a lot. Let’s have a look at a simple and intuitive example. Here is an overview of our system, consisting of a signal source $x$, a random uniform noise source $d$, and a quantizer (2-bit, to ease up our drawings later on).

Analog additive dither and a quantizer

Let’s apply a sine wave $x$ as our input signal and skip adding the dither signal $d$; then the input and output of the quantizer look like this:

An ideal quantized sinewave

Now, if we inject random uniform noise $d$ (its distribution plays a key role, to be discussed in Part 2 of this post) with an amplitude of one least-significant bit (LSB), we observe the following picture:

A quantized sinewave with additive dither

Intuition suggests that if we now average the samples, we might actually obtain a higher-resolution quantization. Think low-pass filtering. To give a very clear illustration, I’ll introduce a numerical example. Consider a DC signal of 2.5 Volts and an ideal 2-bit quantizer:

An example of DC signal oversampling with and without dither injection.

We can see that without any dither the average converted value is still 3, while by adding dither we randomize the error, and after averaging we have gained additional resolution. Obviously my 2.5 V example is a bit too idealistic, and there is an awful lot more to this, but I just wanted to make my first attempt at introducing the concept here. Some other day I will write about analog additive and digital subtractive dither, as well as its impact on images.
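The 2.5 V example is easy to reproduce in a few lines (a sketch under the same idealized assumptions: a 2-bit quantizer with 1 V per LSB and perfectly uniform ±0.5 LSB dither):

```python
import random

random.seed(42)

def quantize_2bit(volts):
    """Ideal 2-bit quantizer: rounds to the nearest of the codes 0..3 (1 V per LSB)."""
    return min(3, max(0, int(volts + 0.5)))

DC = 2.5  # volts, sitting exactly between codes 2 and 3
SAMPLES = 100_000

# Without dither, every conversion returns the same code.
no_dither = [quantize_2bit(DC) for _ in range(SAMPLES)]

# With +/-0.5 LSB uniform dither the code toggles between 2 and 3,
# and averaging recovers the sub-LSB value.
dithered = [quantize_2bit(DC + random.uniform(-0.5, 0.5)) for _ in range(SAMPLES)]

print("average without dither:", sum(no_dither) / SAMPLES)
print("average with dither:   ", sum(dithered) / SAMPLES)
```

With dither, the fraction of samples that land on code 3 encodes exactly how far the DC level sits between the two codes, which is what the averaging digs out.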