The tale of how Microsoft co-founder Bill Gates got computer giant IBM to use his DOS operating system on its first PC, before going on to sell it to every other computer maker in the industry, is now the stuff of tech legend.
But few people remember that there was a second chapter to the saga of Bill Gates and IBM: The story of OS/2. It’s a tale explored by Ars Technica’s Jeremy Reimer.
For the uninitiated, the tale begins when IBM gave Gates and Microsoft the big break they needed:
Over the course of the day, a contract was worked out whereby IBM would purchase, for a one-time fee of about $80,000, perpetual rights to Gates’ MS-DOS operating system for its upcoming PC… In exchange for giving up perpetual royalties on MS-DOS, which would be called IBM PC-DOS, Gates insisted on retaining the rights to sell DOS to other companies.
However, because of Gates’ move to sell DOS to other computer makers, by 1984 IBM found itself getting cut out of the very PC market it had helped to create:
Intel and Microsoft were getting rich, but IBM’s share of the PC pie was getting smaller and smaller each year. … IBM needed to design a brand new operating system to differentiate the company from the clones. Committees were formed and meetings were held, and the new operating system was graced with a name: OS/2.
On another front, competitors including Apple, Commodore and Atari were beginning to sell computers that used graphical user interfaces and mice. IBM was forced to create something more user-friendly than the old text-based interface of DOS:
There was another problem that was happening in 1985, and both IBM and Microsoft were painfully aware of it. The launch of the Macintosh in ’84 and the Amiga and Atari ST in ’85 showed that reasonably priced personal computers were now expected to come with a graphical user interface (GUI) built in.
IBM responded by making OS/2 a mouse-based operating system, like its competitors. However, the bureaucracy at IBM ended up creating a product that used far more memory than its rivals. This would come back to haunt IBM in a way it hadn’t predicted:
RAM prices had been trending down for years, from $880 per MB in 1985 to a low of $133 per MB in 1987. This trend sharply reversed in 1988 when demand for RAM and production difficulties in making larger RAM chips caused a sudden shortfall in the market. With greater demand and constricted supply, RAM prices shot up to over $500 per MB and stayed there for two years.
Seeing the slow adoption of the memory-hungry OS/2, Bill Gates set about working on a less memory-intensive alternative called Windows 3.0. Soon, IBM and Microsoft were at war:
IBM still saw Microsoft as a partner in the operating systems business, and it offered to help the smaller company by doing a full promotional rollout of Windows 3.0. But in exchange, IBM wanted to buy out the rights to the software itself, nullifying the DOS agreement that let Microsoft license to third parties. Bill Gates looked at this and thought about it carefully—and he decided to walk away from the deal.
The spike in memory prices compounded a string of fateful business decisions that ended up sinking IBM in its battle against Microsoft and Windows. And as Reimer points out in his article, the saga holds a number of vital lessons for tech businesses today.
Why 9000 Twitter spambots targeted one Californian teenager
Alexis Madrigal at The Atlantic has a tale about how one Californian teenager had a very nasty surprise after coming home from school one afternoon:
It was around 5pm last Thursday when Olivia, a San Diego high school student, noticed that something interesting was going on with her Twitter account.
A swarm of 30 [spambots] had just followed her on the social networking service… At 9:05, she crossed 4,000. At 9:51, she hit 5,000. She changed her Twitter bio to, “5,000 pornstars follow me and idk what to do.” (idk means “I don’t know” for the acronymically uninitiated.)
When Madrigal looked through the fake Twitter accounts that targeted Olivia, a few patterns quickly emerged:
The first thing I noticed: Olivia wasn’t part of every bot in the swarm’s follow list, but she was predominant. No other account that I could find had been targeted so often, not even LeBron James.
The second thing I noticed: the spambots were following a lot of golf caddies. I couldn’t explain that one immediately, but keep it in mind.
The third thing I noticed: Olivia wasn’t the only San Diego high schooler. At least three other San Diego high schoolers, two of whom Olivia knows, were also targeted by the spambot. These kids, though, only got (at most) a few hundred spambot followers.
After some investigation, Madrigal uncovered that all the fake accounts appeared to be operated by a single business:
Casting 360 is run by Igor Reiant, who previously ran Talent6, a similar casting service that was fined $45,000 by San Mateo County for fraud, as revealed by an anti-scam blogger.
Reiant, according to his Twitter feed, appears to like skiing in Squaw Valley. He has 16 followers. I contacted him to ask him about his company’s marketing tactics, but he has not responded.
Looking into why so many fake accounts from a single malfunctioning spambot ended up targeting one Californian teenager reveals some interesting things about how social media spambots work, including the ways in which they take advantage of Twitter’s own APIs.
For anyone with an interest in social media, it’s a piece that’s well worth reading.
Inside the NSA’s surveillance technology
In recent months, the tech industry – particularly in the US – has been reeling from the NSA (National Security Agency) scandal. For those who haven’t been following the story, a former NSA contractor named Edward Snowden has been leaking documents about the agency’s surveillance capabilities.
Many have viewed the scope of the NSA’s operations, as revealed in the documents, as alarming, creating a potential backlash against US tech companies.
Now, in a feature article in The Verge, T.C. Sottek traces the history of the agency and looks at how the technology in its PRISM program works.
The story begins in the early days of the Cold War, when the US sought to intercept Soviet radio communications:
Signals intelligence, or SIGINT, encompasses the interception of electronic signals and communications intelligence, and was an enormous factor in 20th-century wars and diplomacy… In 1952, President Harry Truman issued a memo which led to the creation of the National Security Agency, consolidating the military’s SIGINT responsibilities.
Following September 11, the NSA’s signals intelligence mission was extended to digital communications as part of the War on Terror:
Following 9/11, the NSA’s ability to gather the communications of US citizens was greatly enhanced by the Patriot Act of 2001, the Protect America Act of 2007, the FISA Amendments Act of 2008, and secret interpretations of US law that have only recently begun to enter the public view — interpretations that have surprised and concerned even the Patriot Act’s original authors.
The most interesting aspect of Sottek’s feature is that it reveals, contrary to popular myths about deals with tech companies, that the NSA’s primary methodology remains intercepting communications:
When PRISM was originally reported, The Washington Post and The Guardian suggested something about the program more sinister than reality: that the NSA had “direct access” to the servers of major email and electronic communication providers. While Google, Facebook, and other companies implicated in PRISM collaboration flatly denied any arrangement that would hand the NSA unfettered access to data, the back-and-forth over direct access was window dressing to the NSA’s much larger effort: upstream collection, which sucks data directly from the internet as it passes through the cables that make up the network.
Interception is such a valuable tool because, when your data is stored in the cloud, it often isn’t stored in just one physical location:
Unlike documents in a safe or money in a bank vault, your electronic files probably aren’t sitting in one place, especially if they’re handled by global service providers like Google or Yahoo. In the age of global cloud data, it’s possible (even likely) that information you store with a US company is exchanged between servers located outside of US borders.
With more revelations promised by Snowden, and growing concerns about some of the potential risks of cloud-based storage, US signals intelligence is likely to remain a contentious issue (and a headache for Silicon Valley tech firms) for the foreseeable future.