Wednesday, August 31, 2016

Telairity Dives Deep Into 4K Technology – Part 4

The value of UHD over HD is that it allows us to get closer to screens of the same size, or view
bigger screens at the same distance, with no change in visual quality. In either case, the screen
will appear bigger to us, i.e., occupy more of our total viewing area. And that, we said, means
UHD enables a more immersive or higher quality viewing experience.

This improvement, however, is not free. Its cost is quadrupling the number of pixels per
display, from about 2 million to about 8 million. What are the implications of multiplying
pixels?

Digitally speaking, every pixel is a number, specifically a binary number that represents a
specific color shade. For each pixel, the display reads its number and paints a block of the
corresponding color at the location assigned to that pixel, in a size determined by the
resolution format and the physical dimensions of the display.

The pixel numbering standard in common use today for broadcast television is so-called “8-
bit” color, which generates a binary number 24 bits long for each pixel, sufficient to enable a
total palette of over 16 million colors.1 Since 16 million is more color shades than even the
most discerning human eye can distinguish, 8-bit color (24 bits/pixel) is sometimes called
“true color”, as the first and simplest digital color scheme to enable everything the human
eye can see (and more).2
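
To make that arithmetic concrete, here is a minimal Python sketch (purely illustrative, not tied to any particular display or file format) showing how three 8-bit channels can be packed into a single 24-bit pixel number, and why the resulting palette tops 16 million colors.

```python
# Illustrative sketch: packing three 8-bit RGB channels into one 24-bit pixel value.
# Real displays and file formats differ in byte order and layout; the arithmetic is the point.

def pack_rgb(r: int, g: int, b: int) -> int:
    """Combine three 8-bit channel values (0-255) into a single 24-bit number."""
    assert all(0 <= c <= 255 for c in (r, g, b)), "each channel must fit in 8 bits"
    return (r << 16) | (g << 8) | b

print(f"palette size: {256 ** 3:,}")                       # 16,777,216 possible colors
print(f"pure red as 24 bits: {pack_rgb(255, 0, 0):024b}")  # 111111110000000000000000
```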

The problem created by digital imagery in general, and HD and UHD television in particular,
isn’t that digital technology is inferior to older analog technology, or that it is inadequate to
express the full range of our senses. It is simply that digital technology able to provide a high
quality experience takes a lot of bits, and improvements in quality take even more bits.

Specifically, an HD picture composed of 2 million pixels, each corresponding to a 24-bit
number, requires 48 million bits to express. And that is just for a single frame. Full HD plays
out at 30 frames a second, meaning a total bit rate of nearly 1.5 billion bits every second.
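
As a quick sanity check on those figures, the raw rate can be computed directly from the full HD pixel grid (the exact grid gives slightly more than the rounded 48 million bits per frame quoted above):

```python
# Raw (uncompressed) bit rate of full HD video at 8-bit (24 bits/pixel) color.
width, height = 1920, 1080      # full HD pixel grid (~2 million pixels)
bits_per_pixel = 24             # 3 color channels x 8 bits each
frames_per_second = 30

bits_per_frame = width * height * bits_per_pixel
bits_per_second = bits_per_frame * frames_per_second

print(f"bits per frame:  {bits_per_frame:,}")    # 49,766,400 (~50 million)
print(f"bits per second: {bits_per_second:,}")   # 1,492,992,000 (~1.5 billion)
```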

This is not just a large number; it is an overwhelming number. It is impractical to store 1.5
billion bits for every second of HD video captured, let alone transmit bits at that rate.
Fortunately, there is a powerful remedy for the proliferation of bits required by digital
rendering technology, namely digital compression technology. Compression technology is
especially powerful for video, where standards like H.264 allow the elimination of 299 bits
out of every 300, reducing 1.5 billion bits a second to a much more manageable 5 million bits
a second.
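
Applying that roughly 300:1 reduction to the raw rate computed above recovers the ballpark figure quoted here (the exact ratio achieved by any real H.264 encode depends on the content and the encoder settings):

```python
# Effect of a ~300:1 compression ratio on the raw full HD bit rate.
raw_bits_per_second = 1_492_992_000   # 1920 x 1080 x 24 bits x 30 fps
compression_ratio = 300               # keep roughly 1 bit out of every 300

compressed = raw_bits_per_second / compression_ratio
print(f"compressed rate: {compressed / 1e6:.1f} Mbit/s")  # ~5.0 Mbit/s
```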

But what happens to data rates when the television industry shifts from HD to UHD? In the
next part of this series, we will look at the dark underside of the move to UHD display
technology.

Telairity has made a name for itself as the industry’s leading video processing solutions provider. Please write in to us at sales@telairity.com to learn more about our products and to collaborate with our team.


1. Why are 24-bit pixels described as “8-bit color”? It’s because “8-bit color” refers not to pixel length, but
rather to “channel” length, or the number of bits used to encode each of the 3 primary colors (Red-Green-Blue)
that make up a pixel. Adding the 3 8-bit primary color “channels” together gives the overall total of 3 x 8 or 24
bits/pixel. There are 256 8-bit binary numbers (possible combinations of 1s and 0s between 00000000 and
11111111). Thus, an 8-bit channel provides 256 distinct shades each of Red, Green, and Blue, or 256 x 256 x 256
= 16,777,216 “mixed” colors.

2. Although the long strings of 1s and 0s that comprise binary numbers can seem quite daunting on first encounter, understanding binary numbering is really very easy. The basic rule is just that every bit added to a binary number doubles the number of possible combinations supported. This can be seen most readily by starting at the beginning, with 1 bit, which has only 2 possible values (0, 1). Adding a second bit allows 4 possible values (00, 01, 10, 11). And so on: 3 bits have 8 possible values (000, 001, 010, 011, 100, 101, 110, 111), 4 bits have 16 possible values, 5 bits 32 possible values, etc. By the time you reach the 8-bit values used in “true color” RGB encoding, this doubling algorithm has passed by 64 (6 bits) and 128 (7 bits) to reach 256 possible combinations. The doubling rule itself is most readily understood by the fact that adding a bit simply allows us to write all the numbers of the previous set twice over, the first time tacking a 0 on to the front of all the previous numbers, the second time tacking on a 1 (e.g., compare the 8 3-bit values with the 4 2-bit values shown above).

8 bits is regarded as “true color” since it is the first channel value safely past the outer limits of human color perception. That is to say, if you build up a color bar out of 256 strips, each with an adjacent shade of, for example, red—running from a strip of pure red on one end to a strip of pure black (no color) on the other—this color bar will not appear to the eye as 256 distinct stripes, but rather as a single continuous gradient, shading from red to black by insensible steps. Which is to say, when a color is divided into as many as 256 distinct steps, we have moved below the threshold of noticeable differences between adjacent steps—in other words, no one can tell shade 1 from shade 2, shade 2 from shade 3, and so on down the row of 256 shades. In fact, for most people, the same would be true of a color bar built up from 128 strips (7-bit channels), but the very keenest eyes under ideal conditions might be able to distinguish very faint stripes in this bar. So 7-bit color channels (128 x 128 x 128 = 2,097,152 mixed colors) are not quite past the limits of human perception. But 8-bit color channels, which multiply the number of mixed colors by 8 (= 2 x 2 x 2), are easily sufficient to include not only all the colors anyone might ever be able to distinguish under any circumstances, but many millions more besides that no one can tell apart from their neighbors.

Wednesday, August 17, 2016

Telairity Dives Deep Into 4K Technology – Part 3

In a nutshell, here is the whole technical difference between an HD display and a UHD display:
since UHD formats cram 4X as many pixels onto a screen as HD, for screens of the same
size, UHD pixels are ¼ the size of HD pixels; conversely, for pixels of the same size, UHD
screens have 4X the viewing area of HD screens.
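
A few lines of Python make the trade-off explicit (the 50-inch HD diagonal below is a hypothetical example, not a figure from the text):

```python
# HD vs. UHD: 4x the pixels means quarter-size pixels on the same screen,
# or the same pixel size on a screen with 4x the area (2x each dimension).
hd_pixels  = 1920 * 1080    # ~2.07 million
uhd_pixels = 3840 * 2160    # ~8.29 million
print(f"pixel count ratio: {uhd_pixels / hd_pixels:.0f}x")

hd_diagonal = 50                               # inches (hypothetical HD screen)
uhd_diagonal_same_pixel_size = hd_diagonal * 2
print(f"UHD screen with HD-sized pixels: {uhd_diagonal_same_pixel_size} inches diagonal")
```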


The Difference Between HD and UHD for a Viewer


In simplest terms, then, the whole viewing difference between an HD display and a UHD
display comes down to just one point: bigger screens with no loss of visual quality—where
“visual quality” is measured by the single metric of apparent pixel size. It makes no difference
whether you replace your old display with a new UHD display of the same size and move
closer to it; or keep the same viewing distance, but replace your old display with a bigger UHD
display. In both cases, the effect is exactly the same: the screen looms larger in your visual
space.


UHD Provides a More Immersive Viewing Experience


The ability to increase apparent screen size with no loss of visual quality is not everything, but
it is not nothing, either. The apparent size of a screen in our viewing area is a key factor in
what is generally called viewing immersion; indeed, the illusions of virtual reality are created
largely by covering our entire viewing space with a screen.

By this analysis, then, the advantage of UHD over HD is primarily its ability to create a more
immersive viewing experience, by allowing us to get closer to screens of the same size, and
view larger screens at the same distances, with no loss in visual quality. This is presumably a
good thing, at least when we want to be more immersed in what we are viewing. But, like
many good things, UHD has its own trade-offs.

The Cost of the UHD Experience


The most obvious trade-off for UHD is simply the cost of quadrupling the number of pixels per
video frame, from about 2 million to about 8 million. As a viewer, you might think that doesn’t
matter, as long as advancing display technology makes new 8-million pixel UHD screens
available in the same price range formerly paid for comparable 2-million pixel HD screens.
Like an iceberg, however, the implications of multiplying pixels run far deeper than the visible
surface of a UHD screen. We will turn to that topic in the next part of this series.

Telairity has made a name for itself as the industry’s leading video processing solutions provider. Please write in to us at sales@telairity.com to learn more about our products and to collaborate with our team.

Wednesday, August 3, 2016

Telairity Dives Deep Into 4K Technology – Part 2

Continuing the discussion of resolution standards for television displays we began in Part 1,
the important point about digital bitmap formats like HD and UHD is that they fix the number
of pixels a display has, independent of screen size. Every “full HD” screen is an array of 1920
x 1080 pixels, whether the screen measures 30” or 70” or some other number. Similarly, every
UHD screen is an array of 3840 x 2160 pixels, regardless of how large or small the UHD screen.

Pixel Size and PPI


Obviously, with a fixed number of pixels—roughly 2M in a 2K x 1K (“2K”) HD bitmap, 8M in a
4K x 2K (“4K”) UHD bitmap—what must happen as an HD or UHD screen gets larger or smaller is that
the individual pixels in the array must grow or shrink in size accordingly. This brings us to yet
another critical metric for displays, known as ppi or pixels-per-inch. Although an old idea
(familiar to anyone who has ever bought a raster printer as dpi or dots-per-inch), this metric
was first popularized for displays by Apple, with the term “retina display”, meaning a display
where the pixels are too small to be individually distinguished by the human eye, even on
close scrutiny. In ppi terms, pixels get too small to be seen (by all but the most eagle-eyed)
somewhere just short of the number 300, so a “retina display” is any screen with a ppi number
of 300 or greater.
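
As an illustration of the ppi arithmetic (the 50-inch diagonal is just a hypothetical example), the pixel density of a screen follows directly from its pixel grid and its physical diagonal:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal pixel count divided by diagonal length in inches."""
    diagonal_px = math.hypot(width_px, height_px)
    return diagonal_px / diagonal_in

# Hypothetical 50-inch screens in each format:
print(f"50-inch HD:  {ppi(1920, 1080, 50):.0f} ppi")   # ~44 ppi
print(f"50-inch UHD: {ppi(3840, 2160, 50):.0f} ppi")   # ~88 ppi, far below the ~300 ppi "retina" threshold
```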

The Importance of “Recommended Viewing Distance”


The notion of ppi, in turn, brings us to our final critical metric for this discussion, viewing
distance. Even the largest pixels can be made too small to be individually distinguished by the
human eye, by the simple expedient of moving the eye further away from the display. This is
the principle behind “Jumbotron” displays, which have pixels the size of playing cards (or
bigger), but are designed to be viewed from hundreds of feet away.

If TV screens were built to retina display standards, intended to withstand close scrutiny from
a few inches away, they would be disappointingly small. An HD screen built to the “retina
display” threshold of 300 ppi would be smaller than 7 x 4 inches (about the size of many
current smartphone screens). Even a UHD “retina display” would be less than 13 x 8 inches.
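
Those screen sizes follow directly from dividing each pixel grid by the 300 ppi “retina” threshold; a quick sketch:

```python
# Largest screen each format allows at the ~300 ppi "retina display" threshold.
RETINA_PPI = 300

for name, (w_px, h_px) in {"HD": (1920, 1080), "UHD": (3840, 2160)}.items():
    width_in, height_in = w_px / RETINA_PPI, h_px / RETINA_PPI
    print(f"{name}: {width_in:.1f} x {height_in:.1f} inches")
# HD:  6.4 x 3.6 inches   (smaller than 7 x 4)
# UHD: 12.8 x 7.2 inches  (smaller than 13 x 8)
```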

The reason TV screens of 50” and more are common is simply that TVs are not designed for
close up “retina display” viewing. As screens (pixels) are made bigger, the adjustment made
by display manufacturers is simply to increase the recommended viewing distance (thereby
maintaining a constant apparent pixel size in the eye of the viewer). Conversely, as screens
(pixels) get smaller, viewers are allowed to gradually move closer, following recommended
viewing distance guidelines, again with no change in the apparent pixel size.
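
The “constant apparent pixel size” rule can be stated as a simple proportionality: the angle a pixel subtends at the eye is roughly its physical size divided by the viewing distance, so doubling the pixel size while doubling the distance leaves the apparent size unchanged. A small sketch, with made-up numbers:

```python
import math

def apparent_pixel_angle_arcmin(pixel_size_in: float, distance_in: float) -> float:
    """Visual angle subtended by one pixel, in arcminutes (small-angle approximation)."""
    return math.degrees(pixel_size_in / distance_in) * 60

# Doubling both pixel size and viewing distance leaves the apparent size unchanged.
print(f"{apparent_pixel_angle_arcmin(0.02, 60):.2f} arcmin at 60 in")    # small pixel, close up
print(f"{apparent_pixel_angle_arcmin(0.04, 120):.2f} arcmin at 120 in")  # 2x pixel, 2x distance
```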

It’s taken a while, but we are now ready to explain the impact of the shift from HD to the new
UHD resolution standard … in the next part of this series.

Telairity has made a name for itself as the industry’s leading video processing solutions provider. Please write in to us at sales@telairity.com to learn more about our products and to collaborate with our team.