In the lead-up to Apple’s World Wide Developer Conference, one of the most persistent rumours has been that the tech giant is gearing up to launch a smart home initiative.
Over at Forbes, Michael Wolf examines what this is likely to mean for homeowners:
Apple’s investment in iBeacon has to this point mainly been around retail, but expect the company to utilize iBeacon in the home in any smart home effort. One of the benefits of the smart home is the contextual understanding of both who and where the consumer is at any given time, and iBeacon will likely be the ‘how’ for this type of smart home intelligence via Apple. Imagine lights turning on, locks locking, garage doors opening based on preprogrammed commands through an iOS device.
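Apple has announced no smart home APIs, so the mechanics are pure guesswork, but the trigger pattern Wolf describes (a known beacon comes into range, and a preprogrammed action fires) is simple to sketch. The beacon identifiers and actions below are invented for illustration:

```python
# Hypothetical sketch of beacon-triggered home automation: when a device
# detects a known iBeacon region, run the action preprogrammed for it.
# Beacon names and actions are invented for illustration.

def make_home_controller(rules):
    """rules maps a (beacon_id, event) pair to an action name."""
    log = []

    def on_beacon_event(beacon_id, event):
        action = rules.get((beacon_id, event))
        if action:
            log.append(action)  # a real system would send a command to the device
        return action

    return on_beacon_event, log

rules = {
    ("garage-beacon", "enter"): "open garage door",
    ("front-door-beacon", "enter"): "unlock front door",
    ("front-door-beacon", "exit"): "lock front door",
}

trigger, log = make_home_controller(rules)
trigger("garage-beacon", "enter")     # -> "open garage door"
trigger("front-door-beacon", "exit")  # -> "lock front door"
trigger("mystery-beacon", "enter")    # -> None, no rule matches
```

The interesting part of any real implementation would be the context layer (who is home, which room they are in), which is exactly the intelligence Wolf suggests iBeacon would supply.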
To succeed in the smart home space, especially against integrated consumer electronics giants like Samsung and LG, Apple will fundamentally need to change the way it operates:
This is a big departure from the past. When you look at Apple’s previous initiatives, they haven’t been hugely welcoming of a broad array of hardware partners. Whether it’s their music initiatives or tablets or TV, they have tended to be the main manufacturer while only really welcoming third party hardware partners in accessories and add-ons (like docks and speakers, for example).
This is not possible with the smart home. Whether it’s locks, lighting or white goods, there are big players in this space who have established channels and brands that aren’t going anywhere. Apple knows this, and because of this they’ll likely create a partner program built around some lines of code, a few hardware requirements (e.g. Bluetooth LE) and a certification/branding initiative (e.g. “works with iHome”).
While at this stage it’s still all speculation, the idea of an Apple-controlled home is certainly an interesting one to toy with.
Why Microsoft hired its biggest critic
Mark Russinovich is the kind of person every business owner dreads. The online security expert has built a career exposing flaws and security holes in the products of tech giants.
As Cade Metz at Wired explains, Microsoft was a favourite target:
Mark Russinovich was in the business of pissing [Microsoft] off.
This was the late 1990s, when Microsoft dominated the tech world, its Windows operating systems running so many of the world’s computers, from desktops and laptops to corporate workstations and servers. During the day, Russinovich built software for a tiny New Hampshire software company, but he spent his evenings and weekends looking for bugs, flaws, and secrets buried inside Microsoft’s newest and most important operating system, Windows NT. Sharing his findings with the press or posting them to the web, he frequently pissed off Microsoft, but never so completely as the time he exposed Windows NT as a fraud.
Along the way, some of the secrets Russinovich uncovered no doubt would have caused a few sleepless nights for Microsoft’s co-founder and then-chief executive, Bill Gates:
Windows NT represented Microsoft’s future–its core code would underpin the company’s operating systems for years to come–and at the time, it was sold in two flavors. One was for corporate workstations used by engineers, graphic designers, and the like, and the other was for servers. NT Workstation was much cheaper, but, unlike NT Server, it barred you from running web serving software, the software that delivers websites to people across the internet. Microsoft said that NT Workstation just wasn’t suited to the task. But then Russinovich reverse-engineered the two OSes and showed that the truth was something very different. NT Workstation, he revealed, was practically identical to NT Server. It wasn’t that the OS couldn’t run web serving software. Microsoft just didn’t want it to.
And after Russinovich exposed the practice, releasing a tool that let anyone transform NT Workstation into NT Server, the company responded in typical fashion. Days later, when employees from his New Hampshire company flew across the country to participate in a Microsoft event, Microsoft barred them from the building.
What do you do when faced with a problem like Russinovich? In the case of Microsoft, it offered him a job:
But after several more years spent running his Sysinternals site–where he published a steady stream of exposés that, in his words, “pissed off” Microsoft and other tech outfits–he did join the software giant. The company made him a Microsoft Technical Fellow–one of the highest honors it can bestow–and today, he’s one of the principal architects of Microsoft Azure, the cloud computing service that’s leading the company’s push into the modern world.
Metz goes on to discuss how hiring Russinovich has worked out for the tech giant – including some of the challenges that come with employing a former critic.
The death of the save command
Increasingly, apps and cloud-based services automatically save a user’s data. The shift is leading to the gradual disappearance of one of the great conventions of the computer age: the save button.
Over at Medium, Jeff Jarvis recounts a tragedy that has befallen almost everyone who has used a PC – forgetting to save an important document:
How often have we all learned — the hard way — the price of not hitting the Save button (in the early days of what we called word-processing) or then Ctrl-S (in Microsoft’s Word)? “Did you save your work?” the unsupportive support guy would scold whenever the machine would lose everything I’d been working on since last hitting those comfort keys. The loss was never the computer’s fault. One was supposed to assume, oddly, that the machine was the fallible one in this relationship — it was destined to crash sometime; you simply didn’t know when — and it was the human’s job to cover for the computer, saving one’s work to save its ass.
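The autosave behaviour that has replaced Ctrl-S usually amounts to a simple rule: persist the document whenever enough changes (or enough seconds) have accumulated. A minimal sketch, with the change threshold and in-memory “storage” standing in for a real app’s disk or cloud sync:

```python
# Minimal autosave sketch: snapshot the document after every N changes,
# so at most N edits of work can ever be lost. The save_every threshold
# and the saved_snapshots list are placeholders for real persistence.

class AutosavingDocument:
    def __init__(self, save_every=5):
        self.text = ""
        self.save_every = save_every
        self._unsaved_changes = 0
        self.saved_snapshots = []  # stands in for disk or cloud storage

    def type(self, chars):
        self.text += chars
        self._unsaved_changes += 1
        if self._unsaved_changes >= self.save_every:
            self._save()

    def _save(self):
        self.saved_snapshots.append(self.text)
        self._unsaved_changes = 0

doc = AutosavingDocument(save_every=2)
for word in ["Hello", " ", "world", "!"]:
    doc.type(word)
# After 4 edits with save_every=2, two snapshots exist:
# ["Hello ", "Hello world!"]
```

Real editors typically debounce on a timer rather than a keystroke count, but the effect is the same: the user never has to remember to save.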
For Jarvis, the shift from needing to regularly save documents on your local PC to having documents autosaved mirrors another great shift in writing technology he’s witnessed during his career:
When I started in the business of writing — or what we now call making content — back in the ’70s for Chicago Today, a newspaper that had no tomorrow, then the Chicago Tribune, I was a rewriteman (the job and title both disappeared long before gender sensitivity would have had the opportunity to update it to rewriteperson, rewriter, rewriteist, or live-blogger). Writing on deadline, we’d type on half-length pieces of paper with many carbon copies for our many editors, turning out one paragraph at a time and then yelling “COPY!” (oh, how I loved that), so our words could be edited and then wooshed away by pneumatic tube to be turned into lead, line by line on the Linotype, ready to compose in a page (so many quaint media relics in that sentence).
As painful as the need to constantly save your documents was, it’s easy to forget what a blessing the early word processors were:
Computers changed the way I wrote. Trained as a rewriteman, I’d rush through writing a story as quickly as possible to get the structure in place and include every fact I had so I’d have the comfort of knowing I had a complete article. Then I would use every available second to edit. I came to write by editing. I still do that. I like to get a draft done and then go back and reconsider word choices, no need for WiteOut. I take the luxury of cut-and-paste — without the scissors and glue pot of my formative years — to reconstruct a tale. I cram in another fact or quote. I trim and trim again — a discipline demanded by scarce paper now lost online. And I pause to think by saving. Or I used to.
50 years of Multics
This year marks 50 years since MIT launched one of the great IT research programs of all time: Project MAC and the development of the Multics mainframe computer operating system. Multics is a direct predecessor of Unix, and many of its ideas form the foundation of modern computer operating systems such as Windows 8.1, Linux, Mac OS X, iOS and Android.
Over at BetaBoston, Daniel Dern recounts some of the history of the program:
At its peak, Project MAC (which, by different accounts, stands for “Mathematics And Computation,” “Multiple Access Computer,” or “Machine-Aided Cognition”) had around 400 researchers.
Project MAC’s impact went beyond its specific research activities, according to Paul Green, a Multics alum who is currently a senior technical consultant at Stratus Technologies Inc. and a co-organizer of the Multics reunion. “Project MAC led to the start of an official computer science curriculum at MIT. When I joined Project MAC as an MIT freshman in the fall of 1969, computer science was part of the Electrical Engineering department. Now, MIT’s ‘Course VI’ is the Department of Electrical Engineering & Computer Science.”
Researchers from the program and its successors have gone on to launch some of Silicon Valley’s most iconic tech companies:
Companies founded or strongly driven by Project MAC alums include Digital Equipment Corporation, Prime Computer, Stratus Technologies, Bolt Beranek & Newman, and RSA Data Security. According to MIT, CSAIL’s researchers have created over 100 companies, from 3Com, Akamai, and iRobot to ITA Software, Lotus, RSA Data Security, and Thinking Machines Inc.
Technologies developed as part of the project form the bedrock of many everyday computer technologies:
“We see the use of shared resources today even on our desktop computers, tablets and smartphones,” said Paul Green. “Here, not sharing by people but by several applications running on a device at the same time. Multics’ time-sharing concept is how huge numbers of people can be accessing the same web site and even web page, sending electronic mail, or watching a video on YouTube. Multics concepts and programming tools fed into what became Unix and Windows.”
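The time-sharing idea Green describes, with many users (or today, many applications) each getting a small slice of one shared machine in turn, can be illustrated with a toy round-robin scheduler. The job names and durations are made up for the example:

```python
from collections import deque

# Toy round-robin time-sharing: each job gets a fixed slice of "CPU time"
# in turn, so every user makes steady progress on one shared machine.
# Job names and work units are invented for illustration.

def round_robin(jobs, slice_units=2):
    """jobs: dict mapping job name -> units of work needed.
    Returns the execution trace as (name, units_run) pairs."""
    queue = deque(jobs.items())
    trace = []
    while queue:
        name, remaining = queue.popleft()
        ran = min(slice_units, remaining)
        trace.append((name, ran))
        if remaining - ran > 0:
            queue.append((name, remaining - ran))  # back of the line
    return trace

trace = round_robin({"alice": 3, "bob": 1, "carol": 4})
# alice runs 2 units, bob 1, carol 2, then alice finishes, then carol
```

Multics schedulers were of course far more sophisticated, but this interleaving is the essence of how one machine appears dedicated to each of many simultaneous users.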
There can be little doubt the development of Multics was one of the most important pioneering events in computer science history. If you’re not already familiar with the history, it’s worth taking a look back in time.