Layman's Guide to Computing

Archive

Status update

It’s been a good one-month break, so I figured it’s a good time to post some new updates!

To new subscribers: hello! 👋 This newsletter was active on a weekly basis until 23 April 2022, when I sent out the 13th issue of the 13th season. Up to that point, I had been structuring the content around themes, with each season covering one theme, one concept at a time. From now on, I will be posting at a much slower tempo, and no longer in seasons—that means more unstructured posts.

A recent spate of AI-related news, most recently Google engineer Blake Lemoine claiming that LaMDA is sentient, got me thinking that maybe AI has finally entered mainstream awareness. While we can’t exactly introspect the inner workings of a machine learning model, I don’t think that is an excuse for treating it as a black box. The explanations I’ve found in tech publications don’t explain what is actually going on in machine learning; time for a layman’s guide!

But I’m definitely out of my depth here, since this is not a topic I’ve had the chance to study even at an undergraduate level. Not that it’s going to stop me! I have been unpacking technical explanations beyond my depth for three years now; no reason to stop at machine learning and AI models.

#172
June 18, 2022
Read more

[LMG S13] Issue 169: Search engine optimisation

Previously: A search engine uses bots to build up a database of URLs and their contents. The search engine uses various algorithms to determine the most relevant results for a search request.

Let’s get to it: why are search results so bad so often?

PageRank

While PageRank is no longer the only or even the dominant algorithm for ranking search results, it is probably the most familiar one to most people and is easy to understand.
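To make the idea concrete, here is a minimal PageRank sketch in Python. The tiny four-page link graph and the 0.85 damping factor are my own illustrative choices, not anything taken from a real search engine.

```python
# A minimal PageRank sketch: pages rank higher when other highly-ranked
# pages link to them. The link graph below is entirely made up.
links = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about", "shop"],
    "shop": ["home"],
}

damping = 0.85                      # probability of following a link rather than jumping randomly
ranks = {page: 1 / len(links) for page in links}

for _ in range(50):                 # repeat until the ranks settle
    new_ranks = {page: (1 - damping) / len(links) for page in links}
    for page, outgoing in links.items():
        share = damping * ranks[page] / len(outgoing)
        for target in outgoing:     # each page passes a share of its rank to pages it links to
            new_ranks[target] += share
    ranks = new_ranks

print(sorted(ranks.items(), key=lambda kv: -kv[1]))   # highest-ranked pages first
```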

#171
April 23, 2022
Read more

[LMG S13] Issue 168: Search engines

Previously: Fragmentation is likely a contributor to system slowdown, particularly for mobile devices: the database used by most mobile apps tends to store data in many small chunks rather than fewer big chunks, which slows down data search operations. The most effective measure for improving device responsiveness is usually to clear the app cache, so the app does not attempt to read previous data from storage.

Last issue, we shed a little light on the mystery of why phone and laptop systems slow down over time—apparently the way a file database works is to blame?

This week, we switch topics, to look at something we definitely take for granted: search engines!

What is a search engine?

#170
April 16, 2022
Read more

[LMG S13] Issue 167: Database fragmentation

Previously: There are easy and quick ways to check the validity of the most common advice for resolving system slowdown. But it still seems to happen even after these tips have been tried.

Last issue, we talked about caches and why they are no longer as effective a performance-boosting measure as they once were.

This issue, let’s look into a solved problem that is not-as-solved on Android: file fragmentation.

Storage fragmentation on mobile devices

#169
April 9, 2022
Read more

[LMG S13] Issue 166: A cause of system slowdown: caches

Previously: There are easy and quick ways to check the validity of the most common advice for resolving system slowdown. But it still seems to happen even after these tips have been tried.

Last issue, I walked through common causes of system slowdown suggested by generic tech websites, and explained simple ways of checking if these are really the cause. Quite often, they are not, especially if you are the kind who is careful about internet usage and does regular system maintenance.

So what is going on?

Caches, caches, and more caches

#168
April 2, 2022
Read more

[LMG S13] Issue 165: The myths of system slowdown

Previously: Linux software is distributed through Linux distros. The maintainers of distros maintain repositories of software that have been tested with the distro. Most users will access software in the distro’s repositories through a program called a package manager. So users have full control over when updates and new software should be installed.

Once your laptop hits the magical 1-year window, it somehow seems to … get slower. And slower. Everything takes just a fraction longer. What used to happen near-instantaneously now seems to take a split-second pause. The loading spinner animation feels like it plays just a little longer. And it just gets worse from there with age.

Google search results have a number of things to say about why it happens:

Programs starting up when booting

#167
March 26, 2022
Read more

[LMG S13] Issue 164: Linux, the universal operating system

Previously: Software that we use usually comes from the OS makers, or from third-party developers. These two groups of developers are not the same, and might even have conflicting intentions and goals.

Last issue, we looked at the following categories of software that an end-user might need:

  • System updates
  • Software by the OS maker (first-party software)
  • Software from other developers (third-party software)

In general, “trusted” software comes from a central, authorised source, usually some kind of app store, while “untrusted” software comes from other sources: compact discs or the internet.

#166
March 19, 2022
Read more

[LMG S13] Issue 163: System & software ecosystems

Previously: Typeface families consist of multiple fonts for each style in the typeface. Each font consists of glyphs, which are mathematical shapes described by curves joining points. These shapes need to be rasterised for display on a computer screen, or for printing on paper. Font files usually come in .ttf, .otf, or .woff formats.

Brief recap of the past few issues:

  • Content distribution: Images and other media are distributed with the help of content distribution networks (CDNs, Issue 160), which have regional servers closer to users.
  • Code distribution: Webpage documents and web scripts (in Javascript) are distributed from the host server (which may comprise more than one computer).

And all of this takes place over the World Wide Web, often through the HTTP protocol (Issue 7). That is how data gets to us when we use the internet.

#165
March 12, 2022
Read more

[LMG S13] Issue 162: Fonts

Previously: Cross-site scripting attacks occur when a webpage loads malicious code from a third-party, usually carried out by a script in the page. Today, websites are protected from loading unauthorised scripts through cross-origin resource sharing (CORS) policy implemented in browsers, which only allows a website to load scripts from authorised domains.
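As a rough illustration (and a simplification of what browsers actually do), the sketch below shows the basic decision: the server names the origins it trusts in a response header, and the browser only lets a page use the response if the page’s origin is on that list. The function and example origins here are hypothetical.

```python
# A rough, simplified sketch of the browser-side CORS check.
def cors_allows(page_origin: str, allow_origin_header: str) -> bool:
    """Return True if the server's Access-Control-Allow-Origin header
    permits the requesting page's origin."""
    if allow_origin_header == "*":           # server allows any origin
        return True
    return page_origin == allow_origin_header

# Hypothetical example: a page on example.com loading a resource elsewhere
print(cors_allows("https://example.com", "https://example.com"))   # True
print(cors_allows("https://evil.example", "https://example.com"))  # False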

This is the issue that doesn’t really fit anywhere, but this season is about lots of things we take for granted, and fonts are one of them.

I could probably fill at least half a season talking about fonts and typesetting, but let’s stick to the basics here.

What is a font?

#164
March 5, 2022
Read more

[LMG S13] Issue 161: Security and XSS

Previously: A content delivery network comprises multiple servers around the world that are able to quickly distribute static content (typically images and video) to viewers that request it. This avoids overloading the hosting server, which would otherwise have to serve data over the network, possibly through many intermediary hops.

When you load a modern webpage with all its bells and whistles, it is usually loading its content from a content delivery network (CDN; see previous issue). At the same time, it is running scripts that came with the webpage. These scripts may load other scripts on the same server (first-party scripts), or scripts on other servers (third-party scripts).

What could go wrong?

First-party scripts

#163
February 26, 2022
Read more

[LMG S13] Issue 160: CDNs and content distribution

Previously: Instead of GPS satellites, smartphones can also use wifi points and cell towers to determine their position (if enabled in the OS).

All businessmen know that distribution is everything. How good your product is, is secondary to how you get your product to the customer. This act of getting things to your customer—it’s called distribution, and entire businesses have been built around excellent distribution.

In Issue 157, I described how time is synchronised from time source to server and on to other servers, down the strata of the hierarchy tree of time servers, whereas GPS/wifi location (Issue 158) has a much shallower distribution system: everybody gets their location directly from a GPS satellite if there’s nothing else available, otherwise they get it from the nearest wifi point or cell tower.

What about content?

#162
February 19, 2022
Read more

[LMG S13] Issue 159: Wifi & cell tower location tracking

Previously: To get your location using GPS, your phone requests information from four overhead GPS satellites: their location, and the distance between them and your phone. With this information, your phone can calculate its location.

Okay, so what happens when you are in a tunnel or building and can’t get GPS? How are you still able to use Google Maps to navigate that new sprawl of a mall?

Wifi Positioning System (WPS)

The principles of triangulation still work within a building (thank math 🙏), but now we need other landmarks to replace GPS satellites.
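For a flavour of the maths, here is a toy sketch of estimating a position from distances to three known landmarks on a flat 2-D plan. Real systems work in 3-D, correct for clock error, and use many more measurements; all the coordinates and distances below are made up (chosen so the answer comes out near (30, 40)).

```python
import numpy as np

# Toy 2-D position fix: three landmarks with known coordinates, plus our
# measured distance to each, are enough to pin down our own position.
landmarks = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])  # made-up coordinates (metres)
distances = np.array([50.0, 80.6, 67.1])                         # made-up measured distances

# Subtracting the circle equation of landmark 0 from the other two gives a
# linear system A @ position = b, which we can solve directly.
p0, d0 = landmarks[0], distances[0]
A = 2 * (landmarks[1:] - p0)
b = (d0**2 - distances[1:]**2) + np.sum(landmarks[1:]**2, axis=1) - np.sum(p0**2)

position = np.linalg.solve(A, b)
print(position)   # estimated (x, y), roughly (30, 40)
```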

#161
February 12, 2022
Read more

[LMG S13] Issue 158: GPS

Previously: Time is synchronised from higher-precision sources through a protocol called Network Time Protocol (NTP). A public pool of time servers is available for synchronisation at pool.ntp.org.
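For the curious, here is a small sketch of asking a public NTP server for the time. It assumes the third-party ntplib package (`pip install ntplib`); error handling is omitted.

```python
# A small sketch of querying a public NTP server, using the third-party
# ntplib package. tx_time is the server's timestamp; offset estimates how
# far off our local clock is from the server's.
import ntplib
from datetime import datetime, timezone

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)

print(datetime.fromtimestamp(response.tx_time, tz=timezone.utc))
print(f"local clock offset: {response.offset:.3f} s")
```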

Ah, GPS. The only topic that actually has almost nothing to do with computing … and yet the mobile computers in our pocket rely on it so much.

A short history

The Global Positioning System (GPS) was born of the space age, in 1973, before computers even went mainstream. It was originally used for military applications, particularly navigation, and its first widespread use in a conflict came in the Gulf War (1990–1991). The public finally gained access to it in 1996, after US President Bill Clinton issued a policy directive declaring it dual-use (available for both military and civilian purposes).

#160
February 5, 2022
Read more

[LMG S13] Issue 157: NTP and time-syncing

Previously: To speed up execution and avoid translation overhead, some systems employ ahead-of-time translation, storing the translated instructions to be executed in future. But many systems employ a mix of just-in-time (JIT) and ahead-of-time (AOT) techniques.

This season, I’ll attempt to plug the gaps in the layperson’s working knowledge of Internet-related services. Time, location, wifi and mobile data … almost all will be covered this season!

Global time information

Frequent fliers would no doubt be familiar with the existence of timezones: geographical bands stretching from the North to South pole, within which all locations are assumed to be running on the same regional time. These timezones used to be manually synchronised, by phone or telegram, via operators all over the globe.

#159
January 29, 2022
Read more

[LMG S12] Issue 156: Translation

Previously: Translating a set of instructions before executing it will always lead to a slowdown, although sometimes this may not be noticeable to users.

So, just-in-time (JIT) compilation is really cool and mostly works. Feed in enough instructions to fill a buffer, and execute them. Keep your fingers crossed and hope the buffer doesn’t empty. That’s kind of how our global supply chain works too.

But sometimes it doesn’t go smoothly. The program hits a code branch, and new instructions have to be injected unpredictably. The emulation layer halts temporarily. The program stutters.

We can’t really avoid that, not without rewriting the program anyway. But we can at least decide when to carry out the translation.
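Here is a toy sketch of that choice, with translation reduced to a stand-in function. It is only meant to show the bookkeeping difference between translating blocks on demand (JIT-style) and translating everything up front (AOT-style); the block names and functions are hypothetical.

```python
# Toy sketch: translating code blocks on demand vs ahead of time.
translated_cache = {}

def translate(block):
    return f"native({block})"       # stand-in for the expensive translation work

def execute(native_block):
    pass                            # stand-in for actually running the translated code

def run_jit(blocks):
    for block in blocks:
        if block not in translated_cache:        # translate only when a block is first reached
            translated_cache[block] = translate(block)
        execute(translated_cache[block])

def run_aot(blocks):
    for block in blocks:                         # translate everything before running anything
        translated_cache[block] = translate(block)
    for block in blocks:
        execute(translated_cache[block])

run_jit(["A", "B", "A"])   # "A" is translated once, then reused on the second visit
```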

#158
January 22, 2022
Read more

[LMG S12] Issue 155: Emulation performance

Previously: Programs that were not compiled for the instruction set of the host OS have to go through an emulation layer program. This program translates the instructions of that program into compatible instructions that its own processor can execute.

The Apple M1 is an ARM processor that executes 64-bit ARM instructions. MacOS programs that were compiled for Intel 64-bit x86-64 processors go through the Apple Rosetta 2 emulation layer to run on the M1.

Yes, that’s what I said last issue. But if that were all the Apple Rosetta 2 emulation layer did, the M1 Macbook would not have gotten its rave reviews.

The act of translation

#157
January 15, 2022
Read more

[LMG S12] Issue 154: Emulation

Previously: The cloud offers standard digital business services, accessible through a web interface and API, which any developer (with a credit card) can use. Developers don’t have to reinvent the wheel, so long as they know how to use web APIs.

Virtualisation, particularly system virtualisation, is a real game-changer for those of us who like to have all our apps running in the same operating system, instead of switching operating systems all the time through dual-booting (or Parallels Desktop on a Mac).

But what is stopping us from allowing them to run near-natively on the desktop, their windows directly showing up in the taskbar, without the distracting abstraction of the virtual machine?

Introduction to Emulation

#156
January 8, 2022
Read more

[LMG S12] Issue 153: Using the cloud

Previously: Actually making a web application requires you to set up lots of supporting software and carry out lots of steps to create a suitable app environment.

Last issue, I described the whole host of things that need to be done just to make a web application work on another server, different from where you did your programming.

How do people deploy web services so quickly if there is so much tedium involved?

Birth of the Cloud

#155
January 1, 2022
Read more

[LMG S12] Issue 152: Getting started with programming

Previously: The Java Runtime Environment (JRE) bundles the Java VM and supporting libraries. The JRE has to be installed on the user’s system for Java programs to work, unless the program bundles the supporting libraries. Solo programmers can start programming with OpenJDK for free with fewer features and less support, while commercial companies can license Oracle JDK for better support and features.

So you started taking up programming. Maybe you went to a class, where everything was set up for you and you didn’t have to worry about installing and configuring necessary software. Or you took an online course, where step-by-step instructions were provided and you mostly didn’t have to spend time scratching your head. That’s how it should be; you paid to learn programming, not to learn how to configure a software development environment.

Once you actually have to start writing code though …

Setting up a development environment

#154
December 25, 2021
Read more

[LMG S12] Issue 151: the Java VM

Previously: System VMs provide a set of virtualised hardware that the OS interacts with. Process VMs provide a set of libraries that a program (written in that programming language) interacts with.

If the Java VM lets us write programs that work across multiple OSes, why don’t we write everything in Java, then?

Actually, a lot of enterprises do! But there are some tradeoffs involved in making this work.

What’s bundled

#153
December 18, 2021
Read more

[LMG S12] Issue 150: System VMs vs Process VMs

Previously: Containers are one layer of virtualisation above virtual machines: containerisation systems virtualise access to the operating system, presenting a virtual interface that provides software with the resources it needs, without being aware of software running in other containers on the same system.

Recap

If I need an entire machine on which to install and configure my own operating system (OS), I can rent a virtual machine — this is system virtualisation.

If I just need to run a set of software on a particular OS but don’t want the hassle of managing the rest of the OS, I can containerise them for the OS — this is containerisation.

#152
December 11, 2021
Read more

[LMG S12] Issue 149: History of commercial computing - containerisation

Previously: Renting out virtual hardware instead of physical hardware meant that instead of having to move hardware around and manage it, you could send the data for running an OS to the hosting company and have them be responsible for hardware operations.

Business concerns

Every business computer you have encountered likely runs an operating system (OS). And yet, what value does managing the OS have for the business? They have business software to run—point-of-sale systems, accounting systems, communication systems (e.g. email)—but the OS is no big concern for them, as long as it runs the software!

If I have point-of-sale software that only runs in Windows, and I’m paying for a company to provide it as a service, I don’t care if the software actually runs on Windows with direct hardware access, or if it is doing it through a virtual machine, so long as it works.

#151
December 4, 2021
Read more

[LMG S12] Issue 148: History of commercial computing - cohosting

Previously: Running a virtual machine is like running a physical machine, but within a window in your OS.

Co-located hosting

A not-so-long time ago, to run a website, you literally just ran a webserver on your desktop, connected it to the internet, and gave your IP address to other people. This is a pretty unreliable way to host a business website though. A big company would make business arrangements to procure a reliable internet connection, set up the infrastructure (power, cooling, mounting hardware) required to run multiple computers, and then manage their multiple systems with a full IT management team (hardware & software).

Not every company can afford this. Smaller companies would therefore co-locate their computers (called colo boxes) with bigger companies, enjoying service support and infrastructure for a monthly/yearly fee. Some companies decided to just provide these services as their full-time business, and the hosting business was born.

#150
November 27, 2021
Read more

[LMG S12] Issue 147: Operating systems on virtual hardware

Previously: Virtual hardware can be created in the form of drivers that respond to a program’s requests for hardware resources. If a bootup program enumerates hardware devices and receives a response, then as long as it continues to receive valid and correct responses, it can work with the virtual hardware to run an operating system.

So … what is it like to run an operating system (OS) on virtual hardware? I promised screenshots, but they probably won’t be as exciting as you expect—it looks quite normal!

Creating a virtual machine

I don’t want to purchase a VMware license, so I will be using an alternative virtual machine product: Oracle’s free VirtualBox. This is what it looks like, running on Arch Linux on my laptop:

#149
November 20, 2021
Read more

[LMG S12] Issue 146: Virtual hardware

Previously: Programs do not usually deal with the gnarly details of hardware, but instead access it through an interface. They access storage devices through a filesystem, and access hardware through drivers.

How does one trick an operating system (OS) into coexisting with other operating systems on a single machine? By virtualising hardware into virtual drivers!

Virtual network hardware

Let’s take an example. Take a look at your network devices: there's one for your LAN port, there’s one for your wifi card, and these days there may be one each for your Bluetooth chip and 4G/5G modem too.

#148
November 13, 2021
Read more

[LMG S12] Issue 145: What an app wants, what an app needs

Previously: In 1999, VMware launched VMware Workstation, which allowed multiple operating systems to run off a single machine.

In Season 5, I went into some detail on how our programs work: the code they are written in gets compiled into CPU instructions, which are carried out by the CPU.

#147
November 6, 2021
Read more

[LMG S12] Issue 144: Programs-in-a-vat

Previously: The Apple M1 is a souped-up iPhone processor, with unified memory.

I want to circle back to talking about processors again in this season, because there are a couple of pretty world-shaking ideas I haven’t fully fleshed out in Layman’s Guide yet.

One of them is—hmm where do I begin. As early as 1641, in Meditations on First Philosophy, Descartes proposes that “All that up to the present time I have accepted as most true and certain I have learned either from the senses or through the senses; but it is sometimes proved to me that these senses are deceptive, and it is wiser not to trust entirely to anything by which we have once been deceived.” In other words, Descartes isn’t always sure that he believes what he sees; his senses sometimes deceive him about the nature of reality.

More than three centuries later, in 1999, the Wachowski brothers translate this idea into a more modern form: what if the world as we know it is a simulation running on some other cosmic, otherworldly hardware? Is it possible to signal to our senses so convincingly that a simulacrum may be thought of as real?

#146
October 30, 2021
Read more

[LMG S11] Issue 143: Implications (Part 2) – Future Goals

Previously: Using the same hardware for both smartphones and laptops would make it much easier to write apps for both platforms. The closer they are in features, hardware, and software support, the easier things will be for developers.

So, let’s get some Likely-Asked-Questions (LAQs) out of the way in this last issue.

#145
October 23, 2021
Read more

[LMG S11] Issue 142: Implications (Part 1) - Software

Previously: The Apple A14 and Apple M1 are essentially the same chip architecture: they use almost the same building blocks, just with different numbers of them. On top of that, the Apple M1 implements unified memory, allowing the CPU and GPU (and other SoC components) to share the same system memory, greatly facilitating intra-chip communication.

So, before 2020: smartphones are smartphones, laptops are laptops. They use different types of CPUs with different architectures (Issue 141) and even different instruction sets (Issue 53). Never the twain shall meet.

After 2020: It turns out that smartphone chips can be upgraded and used in laptops, while remaining essentially the same architecture? Its power consumption dial can be turned down to almost zero but also turned all the way up?

That opens up the possibility that smartphones and laptops can run on the same hardware, and there’s nothing technically stopping apps compiled (Issue 54) for that instruction set from running on both!

#144
October 16, 2021
Read more

[LMG S11] Issue 141: The Apple A14 and M1

Previously: Shared memory is easier to implement when a company has control over the designs of both CPU and GPU.

So, to recap:

Most companies design either CPUs or GPUs, but are seldom well-positioned to be excellent at both.

Among the companies that design both CPUs and GPUs, almost none make CPUs for both mobile (smartphones + tablets) and laptops (including low- to mid-range desktops).

#143
October 9, 2021
Read more

[LMG S11] Issue 140: The shared memory dream

Previously: Around 2015, the high-performance computer industry quickly realised that this would be much more efficient if the CPU and GPU could share the same memory. This idea was labelled heterogeneous systems architecture (HSA).

Let’s rewind a bit further from last issue. That was in 2015.

Circa 2009, changes were happening on the desktop motherboard, as the memory controller hub (MCH) came on-board the CPU to reduce latency when communicating with memory (Issues 134–135). But the memory chips themselves remained on the motherboard, and this was the case even in 2018, in Apple’s Macbook Air (Issue 136).

Bringing memory on-board

#142
October 2, 2021
Read more

[LMG S11] Issue 139: What’s before this line is mine, what’s after this line is yours

Previously: A system-on-chip (SoC) combines the core functionality of a system—processing, graphics, memory, and control—into a single chip package.

I am eager to dig into the meat of the A14 and M1! But first I must set up a story.

The hUMA race

Circa 2015 (actually even a couple of years before that), the industry suddenly seemed to wake up and realise that graphics cards could do a lot more than just play video games. The nature of how they work (Issues 121 & 122) makes them very amenable to solving problems in scientific computing, particularly in simulations, which use up computational resources by the petaflop, and energy by the megawatt.

#141
September 25, 2021
Read more

[LMG S11] Issue 138: System-on-Chip (SoC)

Previously: The M1 goes one step further: not only does it make do with fewer chips, it does so with passive cooling.

In Issue 136, I showed the miniaturisation of the Macbook mainboard through a series of pictures. While the laptop has mostly remained the same size (apart from getting slimmer), that is not the case with its components. The bigger components, like memory and storage, changed from being separate, discrete parts to being just more components soldered directly to the mainboard.

But that only gets us so far; even in the M1 Macbook Air, the mainboard is still almost the entire length of a phone. There’s got to be something else.

Today, let’s see how the iPhone has evolved.

#140
September 18, 2021
Read more

[LMG S11] Issue 137: The M1 Macbook Air

Previously: Slim laptops have been undergoing a gradual transition: more and more of their chips are no longer available as a replaceable card, but instead soldered directly to the mainboard. Since 2017/2018, most slim laptops pretty much have CPU, memory, storage, and network chips all soldered directly to the mainboard.

Let’s get to it: Intel vs M1 Macbook Air!

The 2020 Macbook Air: passing the torch

Here’s the Macbook Air in 2020. There was one in early 2020 using an Intel Core CPU, and one in late 2020 using the Apple M1 CPU.

#139
September 11, 2021
Read more

[LMG S11] Issue 136: The mobile workstation – laptops

Previously: A modern CPU is manufactured through a process called photolithography, by which the CPU components are etched onto the silicon substrate by successive layers of chemicals, masking, and laser exposure. When the CPU components could be made small enough, the MCH and CPU were designed onto the same chip, and this is the design used by the Intel Core i7 (1st-gen).

In the last 4 issues, I walked through the general evolution of desktop computers. Let’s go more mobile, and look at laptops. How does something as big as a desktop shrink down to the size of a laptop? And what are the tradeoffs involved?

I addressed the power part of the formula in Issue 130, on power limits; laptops are slimmer in part because part of them—the AC adapter—lies outside the system.

Let’s look at the rest of it.

#138
September 4, 2021
Read more

[LMG S11] Issue 135: Part 2 – Unifying the CPU and MCH (post-2008)

Previously: Light takes 0.3 ns to travel 10 cm, approximately the distance by wire between the CPU and the MCH. This potentially causes operations between the CPU and MCH to slow down by one cycle, at frequencies above 3 GHz. One way the Intel Core i-series resolves this conundrum is to move the MCH into the CPU.
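For the curious, the arithmetic behind that recap works out as below (values approximate): at around 3 GHz, one clock cycle lasts about as long as a signal takes to cross 10 cm of wire.

```python
# Back-of-the-envelope numbers behind the recap above (values approximate).
speed_of_light = 3e8            # metres per second
wire_length = 0.10              # ~10 cm between CPU and MCH

travel_time = wire_length / speed_of_light
print(f"signal travel time: {travel_time * 1e9:.2f} ns")       # ~0.33 ns

clock_frequency = 3e9           # 3 GHz
cycle_time = 1 / clock_frequency
print(f"one clock cycle at 3 GHz: {cycle_time * 1e9:.2f} ns")  # ~0.33 ns
```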

Chipset diagram of ATX systems for Intel Core (i-Series)
An Intel Core i-series ATX system chipset diagram.
The MCH is merged into the CPU, but still a discrete unit.
DDR refers to computer memory, while GDDR refers to graphics card memory (Issue 123)
Source: Ars

Time to close up some open plot points from last issue:

  1. The number of pins on 1st-gen Core i7 is almost triple that of the Pentium 4; what are all those pins for?
  2. The MCH has been moved into the CPU to improve latencies, but how is it possible to make it small enough to do that?
  3. Are there any disadvantages?
#137
August 28, 2021
Read more

[LMG S11] Issue 134: Part 1 – the Intel Core i-series launches!

Previously: The ATX form factor also brought with it a new breed of computers with more specialised chipsets: the memory controller hub (MCH) and peripheral controller hub (PCH). The MCH specialises in high-throughput requirements, such as computer memory and graphics. The PCH specialises in lower-throughput needs.

Last issue, we looked at the ATX form factor by Intel, which replaced the AT form factor by IBM. While the AT could get by with a smattering of chips, which worked fine for mostly text-only computers, the ATX has much higher throughput requirements. To help the CPU focus on serving the user’s applications, two chipsets, the memory controller hub (MCH) and the peripheral controller hub (PCH), take charge of managing the data throughput. The MCH manages data between the CPU, computer memory, the graphics processing unit (GPU), and the PCH, while the PCH manages data between the peripherals (audio, storage, network, USB, ...) and the MCH.

Chipset diagram of ATX systems, up to early Intel Core (i-Series)
An Intel pre-Core i-series ATX system chipset diagram.
The MCH and PCH (labelled ICH here for unimportant reasons) support the CPU in its data operations
DDR refers to computer memory, while GDDR refers to graphics card memory (Issue 123)
Source: Ars

There are terms for each of the connections between chips, which I won’t get into because it largely won’t concern us until we have to design performant systems.

#136
August 21, 2021
Read more

[LMG S11] Issue 133: the ATX form factor (post-1995)

Previously: Chipsets served as go-betweens in the AT form factor by IBM.

In 1993, Intel launched its Pentium line of processors; barely two years later, in 1995, Intel launched the ATX form factor. This was the beginning of Intel’s dominance in the desktop space, and they could well afford to dictate most of the standards for this form factor.

Chipset diagram

Mainboards at this point were complicated enough that, as part of the marketing, tech publications had taken to staring at diagrams of how the chips were connected. These diagrams are called chipset diagrams.

#135
August 14, 2021
Read more

[LMG S11] Issue 132: the AT form factor (pre-1995)

Previously: CPUs have limited throughput, since there is a max frequency they can operate at, and a limit to the number of wires they can be connected to (throughput = no. of wires × frequency). Later designs of early computers increased the capability of computers by delegating more work to secondary chips.
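As a quick worked example of that formula, with illustrative numbers that don’t correspond to any particular chip:

```python
# throughput = number of wires × frequency (in bits per second)
wires = 64                      # a 64-bit-wide connection
frequency = 100_000_000         # 100 MHz, i.e. 100 million transfers per second

throughput_bits = wires * frequency
print(f"{throughput_bits / 8 / 1e6:.0f} MB/s")   # 800 MB/s
```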

When computers began hitting the mainstream market, they were designed to be able to use interchangeable parts so as to reduce cost and inventory. To support this effort, manufacturers came up with standards for how to lay out computer components on a mainboard; the different patterns came to be known as form factors.

The AT form factor, by IBM, is one of the early ones. An AT motherboard looks something like this:

The AT mainboard

#134
August 7, 2021
Read more

[LMG S11] Issue 131: What do early CPUs and startup founders have in common?

Previously: AC power from the wall uses electric current that alternates directions, while DC power from batteries uses electric current that flows in one direction only. All electronics are DC-only, and require an AC-DC adapter to be powered from the wall. The AC-DC conversion produces a significant amount of heat; AC-DC adapters are usually external unless the device has sufficient space or cooling capacity for it.

This season, let’s open up that computer case and see what’s inside. Where does everything fit, and how does all that information get around? More importantly, how are computers able to cover such a large range of sizes, from towering desktops to tiny smartphones?

What a computer wants, what a computer needs

The common model of a computer is that it … computes. It calculates. It takes in numbers, and spits out more numbers.

#133
July 31, 2021
Read more

[LMG S10] Issue 130: Power limits

Previously: The larger the surface area, the faster an object loses heat. The larger the temperature difference between an object and its surroundings, the faster the object loses heat. Heat is bad for computers, and CPUs will need cooling to be able to process computations quickly. A mobile phone thus typically uses no more than 4 W of power, a laptop can use 25–45 W, and a desktop can usually use 65 W and more. Two popular ways of increasing the cooling capacity of a device is to attach a larger piece of metal to the chip (passive cooling), or use a fan to force air over the heatsink (active cooling).

Point 1: A powerful device produces lots of heat.
Point 2: A device that produces lots of heat needs a large surface area (directly in contact with the heat source) to stay (relatively) cool.

These are two of the primary factors determining how tiny a computer can be. Can something the size of an iPhone be as powerful as something the size of a Macbook? It depends on how much cooling is available to it!

One more factor to add in this issue: power. Without power, none of your devices would work … and that is one more source of heat to be dissipated, incidentally.

#132
July 24, 2021
Read more

[LMG S10] Issue 129: Cooling

Previously: Upgradable parts need a slot or socket to be inserted into; these slots/sockets need to be made robust enough, causing them to take up more space than a soldered part. Devices which were designed to be small and portable generally eliminate these as far as possible, opting to have parts directly soldered to the board instead.

Why do computers need power?

Other home appliances I can understand. They need to heat up air/water, move air/water around, or extract heat from air/water to move it elsewhere. These things all need energy. But a computer … all it does is move electrons around! All the information in a computer that changes is just electrons moving; that should not need so much power, should it?

As it turns out, energy-information equivalence theories posit that manipulating information does increase entropy, which does involve energy—but this is the Layman’s Guide to Computing, not a physics newsletter. Let’s just say that managing information, in its abstract sense, needs very little energy.

#131
July 17, 2021
Read more

[LMG S10] Issue 128: Upgradeability

Previously: USB Power Delivery is a specification that describes how much voltage and current can be supplied by different categories of USB cables. It allows power delivery at different levels for all kinds of connected devices, up to 100 W. This should help to simplify cable setups that otherwise require multiple kinds of cables between two closely interconnected devices (such as a laptop and an external monitor).
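For a sense of scale, the sketch below multiplies out a few of the fixed voltage levels commonly quoted for USB Power Delivery; treat the exact pairings as illustrative rather than a reading of the spec.

```python
# power (W) = voltage (V) × current (A)
profiles = [(5, 3), (9, 3), (15, 3), (20, 5)]   # commonly quoted USB PD voltage/current pairings

for volts, amps in profiles:
    print(f"{volts} V × {amps} A = {volts * amps} W")
# The 20 V × 5 A pairing is how the spec reaches its 100 W maximum.
```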

At some point in the past, computers could be upgraded with all kinds of parts: you could upgrade to a better network card, a better processor, or add more memory, without changing out the entire computer!

The short history of personal computers

After IBM released the IBM 360, the first mainframe meant to be used in both large-scale and small-scale applications, it quickly realised that providing service support was going to be a nightmare if each version required its own specialised support. In the 1970s, it was already thinking of a family-of-parts concept that allowed its products to share a set of interchangeable parts, reducing the number of unique parts it had to make and support.

#130
July 10, 2021
Read more

[LMG S10] Issue 127: USB Type-C Power Delivery

Previously: USB is a (licensed) technical standard that describes how devices connect to each other through a cable. USB Type-C is a new connector standard that supports USB 3, DisplayPort, HDMI, and Thunderbolt. It is able to carry multiple types of data simultaneously, in limited combinations. In a USB connection, one device acts as the host while the other acts as the device; the host initiates all communication.

Last week, I differentiated the USB Type-C specification from the USB3 specification; the former describes how the cable and connector should be built, while the latter describes how USB data is transmitted over a Type-C cable. Remember that in addition to USB3 data, the Type-C cable can also transmit HDMI or DisplayPort video data!

#129
July 3, 2021
Read more

[LMG S10] Issue 126: USB Type-C

Previously: Analog formats such as VGA mostly contain the control signals that the CRT needs to operate, while digital formats such as HDMI and DisplayPort contain image data that the device must convert to control signals. Analog signals need a digital-analog-conversion (DAC) chip to be converted to digital signals, hence VGA-HDMI adapters tend to be more costly than DisplayPort-HDMI adapters. Dedicated graphics cards generally support more simultaneous output video streams than integrated graphics cards.

This week, I attempt to untangle the confusion around USB Type-C, informally also referred to as USB-C.

What is USB Type-C?

It is a connector standard. It sets standards for this connector:

#128
June 26, 2021
Read more

[LMG S10] Issue 125: Analog and digital conversion

Previously: The VGA video format originated in the time of cathode-ray televisions (CRTs). It was superseded by HDMI, a video format standardised by consumer electronics companies. DisplayPort, on the other hand, is a video format standardised by computer display companies.

The bulk of the story has been written in Issue 123, so this issue will be short.

Why two digital formats? HDMI vs DisplayPort

HDMI is a consumer electronics standard, and is thus heavily focused on broadcast and home video needs. HDMI primarily supports video and audio data. It also carries some control signals through CEC (for Consumer Electronics Control) capability, enabling a video game console or set-top box to send remote-control commands to a television set via the HDMI connection.

#127
June 19, 2021
Read more

[LMG S10] Issue 124: Video formats

Previously: Graphics cards contain lots of tiny cores that are much better at performing the same calculation for lots of decimal numbers. These cores are organised into compute units; a graphics card with more compute units can perform more calculations every second. Graphics cards have their own onboard memory, separate from the CPU. GPU memory is different from computer memory; it is configured for much higher data throughput. Integrated graphics are GPUs that are integrated into a CPU chip; these do not have their own onboard memory, and share memory with the CPU.

Ah, the esoteric, tricky, complicated art of shooting electromagnetic radiation into the eyes of humans … entire tomes have been written about this. And I will attempt to summarise the pertinent parts into a single newsletter issue. The hubris!

It’s really something when you suddenly remember that television has been around since the 1930s, while computers in some recognisable form were a 1970s invention. The first part of the computer to be invented was the screen!

How did screens work if computers weren’t invented? A crash course:

#126
June 12, 2021
Read more

[LMG S10] Issue 123: Graphics cards: The Pixel Factory

Previously: Computers are general-purpose machines that usually process integer calculations. The graphics pipeline requires more specialised hardware that can process decimal number calculations. This is why high-performance graphics usually requires a graphics card.

So why are gamers so agog over graphics cards (also known as video cards)? That’s because they do one thing really well! Unlike CPUs which often have to process an unpredictable workload, the graphics pipeline involves performing the same categories of calculations over and over again.

Graphics compute units

These calculations, which I gave an overview of in Issue 122, take in tables of numbers, crunch them mathematically, and spit out another table of numbers. Since the calculations are predictable, we don’t need very complicated hardware that enables switching instructions based on the input. We can use specialised cores (clusters of transistors that are custom-fit for the purpose), cram lots of them onto a circuit board, and end up with much better performance for the graphics pipeline compared to the CPU.
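To give a flavour of “the same calculation over lots of numbers”, here is a tiny NumPy sketch. A graphics card takes the same idea much further, running such operations across thousands of specialised cores at once; the array sizes and values here are arbitrary.

```python
import numpy as np

# The same calculation applied to a whole table of numbers at once,
# rather than looping over them one at a time.
vertices = np.random.rand(1_000_000, 3)          # a made-up table of 3-D points

scaled = vertices * 2.0                          # one operation, a million points
shifted = scaled + np.array([1.0, 0.0, -5.0])    # same again

print(shifted.shape)   # (1000000, 3)
```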

#125
June 5, 2021
Read more

[LMG S10] Issue 122: The great flattening

Previously: 3D models are represented with vertices (points), edges (line segments between points), and faces in a computer. Images known as textures can be mapped to faces to give the impression of detail.

Having a model represented in a computer as a large set of numbers is cool, but nobody does 3D modelling like that. We need something to look at! We need a way to convert our model into a flat picture, ideally displayed on our monitors. And this conversion process needs to be fast enough that as we rotate or change the view of our model, the computer can keep up, displaying the changes in real-time.
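As a toy illustration of one step of that conversion, here is a simple perspective projection that turns 3-D points into 2-D coordinates by dividing by depth. A real graphics pipeline does far more than this, and the focal length and points below are arbitrary.

```python
import numpy as np

def project(points_3d, focal_length=1.0):
    """Perspective projection: points further away (larger z) land closer
    to the centre of the image, giving the impression of depth."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    return np.stack([focal_length * x / z, focal_length * y / z], axis=1)

# Two corners of a made-up cube, one nearer and one further from the camera
corners = np.array([[1.0, 1.0, 2.0],
                    [1.0, 1.0, 4.0]])
print(project(corners))   # the further corner projects closer to the centre
```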

#124
May 29, 2021
Read more

[LMG S10] Issue 121: In graphic detail

Previously: Driver files provide information about the driver, and instructions on how to receive information from the device, and encode information to be passed to the device. The operating system may come with generic driver files for the device, but custom driver files might provide better performance or additional features.

This issue, let’s start from scratch with graphics: how does a machine that only processes 1s and 0s work with graphics? For starters, let’s think: what can we represent about graphics if we only have numbers?
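One possible answer, as a minimal sketch: a picture can be nothing more than a grid of numbers, each number saying how bright (or what colour) one dot on the screen should be.

```python
# A 5×5 monochrome "image": 0 means a dark pixel, 1 means a lit pixel.
image = [
    [0, 1, 1, 1, 0],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 1, 1, 1, 0],
]

for row in image:
    print("".join("█" if pixel else " " for pixel in row))   # draws a rough ring
```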

#123
May 22, 2021
Read more
 
Older archives