Computer users of a certain age will no doubt remember the Commodore 64 and its big sister, the Commodore 128.
Alongside the original IBM PC, the Apple II series and the BBC Micro B, Commodore’s computers played a big part in defining the early years of the computer age.
Bil Herd was the engineer at Commodore who led the team that put together the Commodore 128 in just five months. In a new article at Hackaday, he explains how he and his team did it:
My name is Bil Herd and I was that long-haired, self-educated kid who lived and dreamed electronics and, with the passion of youth, found himself designing the Commodore C-128, the last of the 8-bit computers which somehow was able to include many firsts for home computing. The team I worked with had an opportunity to slam out one last 8 bit computer, providing we accepted the fact that whatever we did had to be completed in 5 months… in time for the 1985 Consumer Electronics Show (CES) in Las Vegas.
The most amazing aspect of Herd’s team’s effort is that they couldn’t simply buy the chips they needed off the shelf. Instead, they had to design and build them themselves:
We (Commodore) could do what no other computer company of the day could easily do; we made our own Integrated Circuits (ICs) and we owned the two powerhouse ICs of the day; the 6502 microprocessor and the VIC Video Display IC. This strength would result in a powerful computer but at a cost; the custom IC’s for the C-128 would not be ready for at least three of the five months, and in the case of one IC, it would actually be tricked into working in spite of itself.
Of course, this was never going to be just an engineering project. It quickly became apparent they would have to deal with commercial pressures as well:
To add to the fun, a couple of weeks later the marketing department in a state of delusional denial put out a press release guaranteeing 100% compatibility with the C64. We debated asking them how they (the Marketing Department) were going to accomplish such a lofty goal but instead settled for getting down to work ourselves.
If you feel nostalgic for the simpler early days of computing, or just love a good engineering challenge, Herd’s first-hand explanation of putting together this iconic computer of the ‘80s is worth a read.
How Bitcoin transactions work
Bitcoin has taken the world by storm. But how, exactly, do transactions take place? What are the mechanics behind them?
It’s a question answered in quite a clever way in this detailed blog post by physicist Michael Nielsen.
Nielsen begins by explaining that, at its core, Bitcoin is a cryptographic protocol for sending, processing and receiving messages about transactions, rather than a “currency” in the traditional sense:
It may seem surprising that Bitcoin’s basis is cryptography. Isn’t Bitcoin a currency, not a way of sending secret messages? In fact, the problems Bitcoin needs to solve are largely about securing transactions — making sure people can’t steal from one another, or impersonate one another, and so on. In the world of atoms we achieve security with devices such as locks, safes, signatures, and bank vaults. In the world of bits we achieve this kind of security with cryptography. And that’s why Bitcoin is at heart a cryptographic protocol.
So what, exactly, is this Bitcoin protocol and how does it work?
Nielsen explains it, step by step, by sketching out a simple example of a cryptographic protocol he calls “Infocoin”, and then adding features to it until he arrives at how Bitcoin works:
My strategy in this post is to build Bitcoin up in stages. I’ll begin by explaining a very simple digital currency, based on ideas that are almost obvious. We’ll call that currency Infocoin, to distinguish it from Bitcoin. Of course, our first version of Infocoin will have many deficiencies, and so we’ll go through several iterations of Infocoin, with each iteration introducing just one or two simple new ideas. After several such iterations, we’ll arrive at the full Bitcoin protocol. We will have reinvented Bitcoin!
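The very first of Nielsen’s iterations rests on digital signatures: Alice announces a transfer in a message that only she could have produced, and anyone can verify it with her public key. As a minimal sketch of that idea, not Nielsen’s own code, here is a signed “Infocoin” message using textbook RSA with toy-sized primes (the keypair, function names and message format are all illustrative, and the tiny key is hopelessly insecure):

```python
import hashlib

# Toy RSA keypair for Alice (tiny primes: illustration only, utterly insecure).
p, q = 61, 53
n = p * q                     # public modulus (3233)
e = 17                        # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent, kept secret by Alice

def msg_hash(message: str) -> int:
    """Hash the transaction message down to an integer below n."""
    return int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % n

def sign(message: str) -> int:
    """Alice signs the message hash with her private key d."""
    return pow(msg_hash(message), d, n)

def verify(message: str, signature: int) -> bool:
    """Anyone can check the signature using only the public key (n, e)."""
    return pow(signature, e, n) == msg_hash(message)

tx = "I, Alice, am giving Bob one infocoin, serial number 1234567"
sig = sign(tx)
print(verify(tx, sig))  # True: the message really came from Alice
print(verify(tx.replace("Bob", "Eve"), sig))  # tampered message: almost certainly rejected
```

The key property is the asymmetry: producing a valid signature requires the private exponent, but checking one needs only the public key, so the whole network can validate Alice’s announcement without being able to forge one.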
But why would you bother understanding the Bitcoin protocol? It turns out there are very good reasons why you should:
Understanding the protocol in this detailed way is hard work. It is tempting instead to take Bitcoin as given, and to engage in speculation about how to get rich with Bitcoin, whether Bitcoin is a bubble, whether Bitcoin might one day mean the end of taxation, and so on. That’s fun, but severely limits your understanding. Understanding the details of the Bitcoin protocol opens up otherwise inaccessible vistas. In particular, it’s the basis for understanding Bitcoin’s built-in scripting language, which makes it possible to use Bitcoin to create new types of financial instruments, such as smart contracts. New financial instruments can, in turn, be used to create new markets and to enable new forms of collective human behaviour. Talk about fun!
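The scripting language Nielsen mentions is a small stack-based language: a transaction output carries a locking script, and whoever spends it must supply data that makes the combined script evaluate successfully. As a hedged illustration of that idea only (this is a made-up toy evaluator, not Bitcoin’s actual opcode set or semantics), here is a hash-lock: the output can be spent by anyone who reveals a secret whose hash matches:

```python
import hashlib

def toy_script_eval(script, stack=None):
    """Evaluate a tiny, made-up subset of a Bitcoin-Script-like stack language."""
    stack = list(stack or [])
    for op in script:
        if op == "OP_DUP":
            stack.append(stack[-1])           # duplicate the top stack item
        elif op == "OP_SHA256":
            stack.append(hashlib.sha256(stack.pop()).digest())
        elif op == "OP_EQUALVERIFY":
            if stack.pop() != stack.pop():    # top two items must match
                return False
        else:
            stack.append(op)                  # anything else is data pushed onto the stack
    return True

# Locking script: "spendable by whoever knows the preimage of this hash".
secret = b"open sesame"
lock = ["OP_SHA256", hashlib.sha256(secret).digest(), "OP_EQUALVERIFY"]

print(toy_script_eval([secret] + lock))    # correct preimage unlocks the output
print(toy_script_eval([b"wrong"] + lock))  # wrong preimage fails
```

Real Bitcoin locking scripts typically also demand a valid signature (via a checksig opcode), but the structure is the same: spending conditions are small programs, which is what makes instruments like smart contracts expressible at the protocol level.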
If Bitcoin continues to be adopted as a way of processing transactions, these additional features are likely to become incredibly valuable – it’s worth getting familiar with them at the most basic level.
The power to fuel the cloud
Cloud storage is becoming a ubiquitous part of life in the digital age. But users often give little thought to the massive amounts of energy required to keep warehouses full of computer servers online.
It’s an issue James Glanz from the New York Times investigates:
Today, the information generated by nearly one billion people [with Facebook accounts] requires outsize versions of these facilities, called data centres, with rows and rows of servers spread over hundreds of thousands of square feet, and all with industrial cooling systems.
They are a mere fraction of the tens of thousands of data centres that now exist to support the overall explosion of digital information. Stupendous amounts of data are set in motion each day as, with an innocuous click or tap, people download movies on iTunes, check credit card balances through Visa’s Web site, send Yahoo e-mail with files attached, buy products on Amazon, post on Twitter or read newspapers online.
The problem with the explosion of data centres in recent years, Glanz argues, is that many are very inefficient in their energy use:
Most data centres, by design, consume vast amounts of energy in an incongruously wasteful manner, interviews and documents show. Online companies typically run their facilities at maximum capacity around the clock, whatever the demand. As a result, data centres can waste 90% or more of the electricity they pull off the grid, The Times found.
To guard against a power failure, they further rely on banks of generators that emit diesel exhaust. The pollution from data centres has increasingly been cited by the authorities for violating clean air regulations, documents show. In Silicon Valley, many data centres appear on the state government’s Toxic Air Contaminant Inventory, a roster of the area’s top stationary diesel polluters.
In a highly competitive tech industry, many security-focused businesses are apprehensive about giving out the locations of their data centres, let alone their energy use. However, in the US, information can be obtained through various other means:
To investigate the industry, The Times obtained thousands of pages of local, state and federal records, some through freedom of information laws, that are kept on industrial facilities that use large amounts of energy. Copies of permits for generators and information about their emissions were obtained from environmental agencies, which helped pinpoint some data centre locations and details of their operations.
In addition to reviewing records from electrical utilities, The Times also visited data centres across the country and conducted hundreds of interviews with current and former employees and contractors.
With a growing number of consumers and businesses concerned about environmental issues, and our appetite for data growing ever larger, this is likely to become an increasingly important issue over the coming years.
Why there are three different types of Windows
If you’re a novice Windows user, the difference between Windows Phone, Windows RT and Windows 8.1 can be confusing.
As Peter Bright over at Ars Technica argues, at least on the surface, reducing the number of versions of Windows is a very appealing idea:
At the moment, Microsoft has a bunch of consumer-facing Windows-derived brands: Windows 8.1 for x86 and x64 PCs, Windows RT for ARM PCs, and Windows Phone for smartphones. According to research firm Canalys, that’s at least one too many, with Windows Phone and Windows RT specifically named as confusing “to both developers and consumers alike.” Both operating systems are used on “smart devices,” so why have two?
Last week Julie Larson-Green, head of Microsoft’s Devices and Studios Engineering Group, told the audience at a UBS investor event that Microsoft was “not going to have three” operating systems in the future. Larson-Green outlined a need for two operating systems: a locked down mobile-oriented one and a full-strength one for tasks that need full flexibility.
However, Bright points out that there are some big obstacles in the way of combining Windows 8.1 with Windows RT:
The Windows RT branding as such carries two unrelated implications. The first is the use of an ARM processor. The second is a locked-by-default software environment. Neither of these things is going to go away. While a future in which Intel processors are abundant in smartphones is plausible in a way that it once wasn’t, thanks to the latest generation of Atom processors, ARM support is, for the time being, a non-negotiable feature for any smartphone or tablet operating system.
Likewise, there are technical challenges in combining Windows Phone with either RT or 8.1:
For Windows Phone 8, Microsoft wanted to use the NT kernel. The NT kernel is more capable and is where most of Microsoft’s development effort is spent, so this made sense for the company (if not for end users). Since the development of Windows RT meant that the Windows software stack ran on ARM, there was no longer any reason to stick with Windows CE. Accordingly, Windows Phone 8 shares major parts with Windows 8, with low-level components such as the network stack and security infrastructure in common between the operating systems.
Windows Phone apps can’t use Windows’ Win32 API. Nor can they use most of the new WinRT API… This makes Windows Phone 8 a strange orphan operating system. Windows Phone 8 has few APIs in common with either Windows or Windows RT, so while iOS and Android phone apps can also be used on iOS and Android tablets, Windows Phone apps are strictly for the phone alone.
If you’ve ever wondered why Microsoft’s product line is structured the way it is, Bright provides a good technical explanation.