18 January 2016

Silicon Cognition



Several years ago I flogged the term “Cloud-Based Episodic Parallel Processing” and tossed c-bepp.org onto the Internet with a concise manifesto of a vision: a cloud API that applications running on PCs and other puny computers could call whenever massive computation (or computation on massive amounts of data) was required.

My focus was applications like personal medicine, where a patient’s symptoms and genomic information (for example, the DNA sequence of their tumor) could be correlated against a vast database of similar sufferers, and the best course of therapy for the particular patient selected based on outcomes amassed over time and the spectrum of therapies that had been tried.

I likened the process to a hybrid car, which carries a modest gas engine of barely sufficient power for light-demand usage but also an electric drive to add bursts of power in high-demand situations.

C-BEPP received limited traction at the time: some interesting conversations, some expressions of interest, a few amens.  But ultimately it didn’t gel.  It wasn’t actionable.  The necessary software constructs seemed unapproachable.

But tonight it struck me: in the end, it actually did gel in a sense— with a twist.  Driving home, my watch guided me along the roadways, gently tapping me when turns were necessary: tap, tap, tap for right; tap-tap, tap-tap, tap-tap for left.  It did this by harnessing a sort of episodic cloud-based processing: Here was the puniest of personal computers, drawing intelligence from a distant cauldron of immense compute power.

The twist is that there is an intermediary I had not foreseen: the smartphone.  Powerful computers in their own right by any classical measure, these always-connected, sensor-festooned, multi-networked wonders provide valuable preprocessing and situational information to the distant brain based in the cloud.  My phone interacted with me via the watch, and it corresponded with the cloud-brain, serving as interlocutor and conductor of the orchestra of data to and from my wrist.

In return, the cloud-brain gained insight from observing me: knots of traffic I encountered, my speed, diversions and alternate routes I chose to take, and so on.  Perhaps even biometric information from my watch caught its interest, since that was available too.  Who knows, if I wrenched the wheel sharply, it might even have noted an obstacle in the road.

My eyes were its.

Silicon Ganglia

Throughout history, successful innovations have often paralleled biological systems.  Airplanes possess similarities to birds; submarines to fish.  Advanced coatings draw heavily from nanostructures on plant leaves, unwettable by even the heaviest rain.  Camera imaging chips draw from the design of the retina, and on and on.  It's called biomimetics.

So here we see an emerging compute paradigm similar to the biological structure known as a ganglion: a mini-brain, a peripheral intelligence interposed between transducers in the field and central processors in the cloud.

My phone's role in my evening commute made the pattern concrete.  In 2016, it is the ganglion in a diffuse intelligence in which I am a participant, beneficiary, transducer… and sensor.

Silicon Intelligence

Looking back at 2015, perhaps the most remarkable thing I read was a Google Research blog post entitled “Inceptionism: Going Deeper into Neural Networks” (http://googleresearch.blogspot.co.uk/2015/06/inceptionism-going-deeper-into-neural.html).

This described recent work in correlative recognition using the specialized compute structure called a neural network.  These systems learn by observing large data sets.  Their predictive capability self-adjusts over time via observation and feedback (http://www.dspguide.com/ch26/4.htm).
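For the concretely minded, that feedback loop is easy to caricature in a few lines of Python.  This is a toy, not Google's system: the made-up data set, the single layer of weights and the learning rate are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy data set (assumed for illustration): the "truth" is y = 1 when x1 + x2 > 1.
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w = np.zeros(2)   # the network's weights, adjusted by feedback
b = 0.0           # its bias term
lr = 0.5          # learning rate: how hard each piece of feedback pushes

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    p = sigmoid(X @ w + b)            # observe: make predictions on the data
    err = p - y                       # feedback: how far off were we?
    w -= lr * (X.T @ err) / len(X)    # self-adjust the weights accordingly
    b -= lr * err.mean()

print("accuracy after training:", ((sigmoid(X @ w + b) > 0.5) == y).mean())

Run it and the final accuracy lands far above chance; nobody told the program the rule, it absorbed it from observation and feedback.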

And, when idle, they dream.  That Google Research blog post presented otherworldly phantasms imagined by the neural networks when ruminating over image databases.
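The "dreaming" trick itself is not mystical: freeze the trained network's weights, pick a unit, and run gradient ascent on the image so that the unit fires harder and harder; the image drifts toward whatever that unit has learned to see.  Here is a toy sketch of the idea with a tiny random stand-in network (the shapes, step size and tanh unit are assumptions for illustration, and this is emphatically not Google's model):

import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 10)) * 0.1   # frozen "network": 64-pixel input, 10 units
img = rng.standard_normal(64) * 0.01      # start from a near-blank, noisy "image"
unit = 3                                  # the unit whose activation we amplify
step = 0.5

for _ in range(100):
    pre = W.T @ img                            # pre-activations of the 10 units
    act = np.tanh(pre)                         # their activations
    # gradient of act[unit] with respect to the image: (1 - tanh^2) * W[:, unit]
    grad = (1.0 - act[unit] ** 2) * W[:, unit]
    img += step * grad                         # ascend: nudge the image, not the weights

print("activation of the chosen unit:", np.tanh(W.T @ img)[unit])

In a real network with millions of parameters trained on photographs, the same nudging of the input is what conjures the otherworldly phantasms in that post.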

Haunting stuff, not least because the phantasms seem so brilliantly dream-like.  The stuff of a dawning intelligence.  The stuff of nightmares.

What recognitions, then, might such an intelligence draw if fed something close to the sum of human thought and behavior?  In 2016, I think we will begin to find out, because portals to that sum exist today.  They are specialized in various ways: consider the databases of a Google, an Amazon, an NSA.  Immense insights await the hungry brain capable of observing it all.

Silicon Consciousness 

There’s more.  The difference between intelligence and consciousness is simply being-in-the-now.  Observing and correlating databased truth is one thing, and it allows inferences and recognitions to be drawn.  But observing and correlating the world as it happens is quite another thing, and this is the natural agenda of the Internet of Things as it unfolds, whether its participants are aware of it or not.  This is the stuff of consciousness, not just intelligence.

And, in 2016, it is here.

Silicon Precognition

Philip K. Dick’s 1956 short story, “The Minority Report” (later turned into an engaging movie), centered on mutants capable of foreseeing crime.  In 2016 we need no mutants: we have the germ of silicon consciousness animated by silicon intelligence, leavened by a critical mass of human thought and behavior, and illuminated by a real-time sensorial feed from exponentiating millions of sensor-equipped minions like me, just going about our days.  Given such a vast and dense sense of history and of the present, the leap to foreseeing the future is a minor one.

And an inevitable one.

Consider again who those big-data players are.

Consider again the real value of big data in 2016.

Biomimetics, writ large

Throughout history, successful innovations have often paralleled biological systems.  And so the beehive enjoys a collective consciousness, animated and leavened and illuminated by its throngs of worker bees as they go about their day.  Occasionally one will find a promising flower-patch and will convey news of the bounty back to the hive by pheromone and even through coded dance.

And so I arrived home, finding my family freshly returned from a wine-tasting expedition in the hills behind us.  My day’s meetings had gone well; the Pinot Noir they brought home is truly excellent.  The brain in the cloud observes my copacetic pulse-rate, the relaxed sway of my motions, the contemplative tap-tap of my keyboard.

What is my coded dance?





This essay was originally posted on Patricia Seybold's customers.com site.

06 June 2013

Free vs. Freeing

This essay was originally posted on Patricia Seybold's Customers.com.


Drive through the green hills of the Virginia horse country: dotting the rolling landscape are lavish estates of the truly rich, distant stone monuments to old money overlooking countless acres of beauty and protection. Nearer the cities lie gated communities of the merely well-off, cloisters of fine homes and upscale condominiums onto whose streets only the authorized are admitted by uniformed guards. Then come the jumbled neighborhoods of the common, for whom horses are perhaps something to bet on and cheer if they’re of any consequence at all, the everyday business of existence being more at stake each day.

The stratification has evolved naturally, with the wealthy seeking high and isolated ground of their own, the accomplished erecting pens for themselves, and the rest left to strive as best they can as they bump and clash their way through their weeks and days, doing what’s obligated and taking what’s offered. Humans have organized themselves in similar ways for centuries: feudal fiefdoms, the beau monde, tribal civilizations, even academic societies all fall naturally into such patterns.

It’s nothing new.


The term personal computer is now some three decades old, but only recently have computing devices begun to truly shade within the penumbra of the person. A beige box on a desk was, in retrospect, hardly personal. A location-aware computing and communications device in one’s pocket is more so—always on, sense-enabled, and always connected to the new compute, data, and social resources of the world—and it has begun to influence, define, and modify personhood. Wearable computers are next: beyond that lies conjecture, but it is certain that devices will fuse ever closer to the essence of the person even as their resources and mechanisms evanesce to the cloud.

It’s unremarked, but of little wonder, then, that a similar stratification is emerging among these more-truly-personal devices. From the roiling primordial bog of silicon and copper that incubated technology for 30 years have emerged devices made increasingly in our own image: they help us see, help us remember, help us chatter and enjoy, help us earn and retain wealth—both the old kind, denominated in numbers and influence, and a new kind denominated in gnostic power.


For those affluent in this new way, there will be self-contained estates, separate and secure. For them, information is something maintained aloof and stoic and distant, and, most of all, detached. These elite of the new elites will build their own unapproachable stone monuments to their uncommon mastery of the wisdom of this age. They will loft their own clouds, because they can.

For those merely well-off in this new way, there will be gated and protected neighborhoods, their assets secured by paid patrols and uniformed guardians. For these burghers of the new era, security is something hired; privacy is a matter of dues and subscriptions; trust a matter of fealty.

For the common—those who participate without influence, who build without architecting, who use without grasping, who exalt but don’t know—the new age offers bounties, but wealth of the new sort is not theirs. Instead, for them lie days of labor leavened by circuses, their comings and goings predictable and, moreover, predicted, by their overseers, who ladle out a benign daily porridge without obvious cost.

Democracy, exult the common: behold the flattening; let no one hunger for bits and bytes and presence, and all for free.

Order, smile the bourgeoisie, secure in their pens, overwatched by guardians.


Liberty, sigh the lords, separate and free.

--

30 July 2012

Here's a clue for critics of Apple's "Genius" ads

What should one advertise?

Hint: There's an old saying, "If you've got it, flaunt it."

And that's what Apple is doing with its new Genius ads, in heavy rotation during the Olympics.  Personally, I find them a bit annoying, but that's the point: they're not for me.

They're for computer users who have spent 45 artery-bursting minutes listening to horrible music, waiting for an incomprehensible, script-reading troll working a phone bank in a far-off land to not solve their problem.

They're for other companies' customers, in other words.

In advertising, you spotlight your differentiations.  And a key reason to buy a Mac (or an iDevice) today is that you can make an appointment at the local Apple Store and talk to someone who speaks your language natively and wants to help.

Apple has Apple Stores.  Apple has the Genius bar.

Other computer manufacturers don't.

Differentiation.  That's what this is about.

22 February 2012

An industry pundit whiffs the ball

Like most techie types, I watch Apple's turnaround into the world's most valuable company with wonderment and admiration.  It helps that I'm a fan and user of their products from the Macintosh 512K days.  I've had great luck with Macs and iDevices and have enjoyed unbelievably good support from Apple itself.  And there's not enough popcorn in the world to feed my fascination as the company once again re-invents the notion of personal computing right before our very eyes.

In the limited time I have for what's become my favorite spectator sport, among my top go-to sources for matters Apple is John Gruber's Daring Fireball.  It's rare that he gets industry happenings wrong, but today was such a day.

Lately there've been some rumors about Microsoft Office finally coming to the iPad.  Great, say I.  Doubt I'd be a customer for it, but it's clear many businesses would be.  Gruber's been watching too, and he notes MG Siegler's fascinating conjecture that Microsoft's fidgety non-denial denial of Office-on-iPad might be explainable if they're to be a part of the iPad 3 launch anticipated in a couple weeks:
But what would be in it for Apple to offer such a spot to Microsoft? You can argue that the iPad with Office available is an even more attractive platform/device than the iPad as it stands today, sans Office. But why share the spotlight with Microsoft? Apple doesn’t need to. The only other tablet computer with any traction in the market is the Kindle Fire — and the Fire is not competing at all in the business productivity market that Office for iPad would target. Android tablets don’t need to be shot down — they still haven’t gotten off the ground. Why give credence and attention to Microsoft in a market where so far Microsoft has had no success?

There are several things wrong with this line of argument.

First, despite their functionality, cost-effectiveness and security, Apple's products today are still foreign to the business world and viewed with some skepticism by management.  A bona fide office suite would handily address that, hastening the erosion of the IT acceptance wall that is already underway.  So it's not at all a given that, as Gruber goes on to say, Microsoft would get more out of it than Apple.

Second, innovation occurs at intersections.  It is beneficial to the iOS ecosystem for Apple to encourage the opening of portals to new cultures, ideas and usages.  

Third, as Steve Jobs said in much different circumstances, Apple needs to let go of the idea that in order to succeed Microsoft must fail.  Shared destiny is a great motivator for all concerned.

Fourth, Apple's spotlighting of app developers and industry partners in its product events is a handy reminder that iOS's ecosystem is instrumental to its success as a platform.  In fact, the announcement of the App Store circa the first anniversary of the iPhone marked a significant upward inflection point in its sales trajectory.  Microsoft should be welcome on that stage.

Lastly, let's face it: Apple's own office suite (iWork) is probably going to look all the more impressive by comparison.

I hope it happens.

===

UPDATE: Gruber has some fresh thoughts on the matter.


07 January 2012

Another Reprieve for Moore's Law

Current semiconductor line widths are pushing 20nm, a span of fewer than a hundred copper atoms.  But just as pinching a hose restricts its flow, the narrowing of current traces on microchips has suggested the impending end of the exponential increase in integrated-circuit densities known as Moore's Law.
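(For scale: taking a copper atomic diameter of roughly 0.26 nm, an assumption for illustration, a 20 nm line works out to 20 / 0.26 ≈ 77 atoms across.)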


Not so fast.  As reported in "Ohm’s Law Survives to the Atomic Scale" in Science v. 335 n. 6064, interconnects with the current-carrying capacity of today's copper traces can be formed by dotting four-atom-wide silicon pathways with phosphorus atoms:
We report on the fabrication of wires in silicon—only one atom tall and four atoms wide—with exceptionally low resistivity (~0.3 milliohm-centimeters) and the current-carrying capabilities of copper. By embedding phosphorus atoms within a silicon crystal with an average spacing of less than 1 nanometer, we achieved a diameter-independent resistivity, which demonstrates ohmic scaling to the atomic limit. 
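A rough translation of that figure into circuit terms, assuming a cross-section of about 0.3 nm by 1 nm for a wire one atom tall and four atoms wide: with ρ = 0.3 mΩ·cm = 3×10^-6 Ω·m and A ≈ 3×10^-19 m², the resistance per unit length is R/L = ρ/A ≈ 10^13 Ω/m, or about 10 kΩ per nanometer of wire.  High by bulk-copper standards, but astonishingly low for a conductor you could not make any thinner, and, per the abstract, independent of diameter all the way down.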

Illuminating reporting is also available at Scientific American and Gizmodo.

Fascinating-- not only does the new technique offer Moore's Law a fresh order of magnitude of headroom, it's nonmetallic!

25 December 2011

Atheists will have to do better than this

Christmas Day musings...

A nice little e-flurry has built around several bloggers' trading of a provocative comment from a book by Penn Jillette:

"There is no god and that’s the simple truth. If every trace of any single religion died out and nothing were passed on, it would never be created exactly that way again. There might be some other nonsense in its place, but not that exact nonsense. If all of science were wiped out, it would still be true and someone would find a way to figure it all out again."

Sigh.  Where to begin.

Well, for starters, an exactly parallel construct would be: If every trace of chalk written on a blackboard were to be erased, it could never be written on that way again.  Which presupposes that whoever wrote on the blackboard wouldn't come and do it over.  Since Jillette's faith holds that there is not and never was a Creator, that makes sense to him.  The chalk, per his faith, was written by men.  And my use of the term "faith" is grounded by Jillette's reliance on such words as "never" and "simple truth."  On inspection, his logic is manifestly circular and self-referential.  (Or is it self-reverential?  [Chuckle] See what I did there?)

Meanwhile, science--Jillette's unerring anti-theist lodestone--is hardly canonical across time or place.  For example, dial the calendar back a few decades, and a conjecture that gastric ulcers might be caused by microbes would be met by hoots and derision.  How silly!  Bacteria that could survive the acid environment of the upper gut!  There was a day, not long ago, when a researcher proposing such foolish heresy would be laughed out of town, and out of his career.  That's close to what happened to Barry Marshall, who shared the 2005 Nobel Prize in Medicine with J. Robin Warren for that specific apostasy.  There are others: Prusiner and his infectious proteins are another good example.  In the physical sciences, the creative moment documented by the 3K microwave background radiation, so evocative of Genesis, was hugely uncomfortable for scientists to accept.  Years before, Einstein himself had striven to adjust his cosmology specifically to avoid such an event, believing (there's that pesky faith again) that the universe must be unchanging forever (and there's that word, an expression of accepting belief if there ever was one).  As another example, may I mention cold fusion?  The avalanche of mockery that met Pons & Fleischmann's premature publicity crushed what just might have been a spectacular technology in its infancy.  Though the phrase "cold fusion" remains a punchline, much about it remains unexplained, and sober minds are daring to propose a second look.

Here in the nanotech field, the miraculous is observed every day.  As the field advances, each layer of the onion peels away to reveal more onion: more unknowns about nature, and perhaps more unknowables.  Jillette's declaration against the divine is regrettable, as it relegates science to monotonic crank-turning.  On this special day, may we reflect on the scientific value of wonder and awe, and continue proposing foolish heresies.

03 December 2011

No, thank you MAM... did open-source kill Sun?


A few years ago I was given a Sun Ultra 2 Creator 3D workstation, vintage 1999, with a then-whopping 1.5GB of RAM, two hard disks and two UltraSPARC processors.

Whoo!  Serious iron.  Weighs a metric ton.  Real hairy-chested UNIX.  Hisses and spits when it runs.

But, it lacked a monitor, and playing with it over RS-232 wasn't too much fun, and I lacked a user account so there wasn't much to do with it.  It sat on a shelf as part of my collection.

Recently I was offered a huge Sun CRT monitor, and it turned out to be compatible.  (And easily 125 pounds.)

So I set it up.

It still left the issue of lacking a user account.  Without system disks, it was impenetrable.  But what I saw was purty...

So I decided to burn a bunch of Solaris 10 update 7 CDs one evening, and start from scratch.  I think the thing had been running Solaris 7 or 8.

Well.  Turns out, with Solaris 10, Sun was deprecating its previous GUI, the Common Desktop Environment (CDE), in favor of Gnome, with which I'm familiar from my Linux usage.  Not a fan of Gnome... so I tried CDE.

And CDE is awful.  It's not what the machine was running before, which seemed airier and more responsive and a whole lot less clunky.

Meanwhile, Gnome is... Gnome.  It's hard to imagine someone spending what this machine originally cost and feeling satisfied with that environment.  It's just nasty.

And despite its great honking 10,000 rpm SCSI disks, two 300MHz 64-bit Sparc processors, and a clock-doubled S-Bus architecture, the thing's a damn slug.

Just abysmal.  I could not be less impressed.

I call it my Mac Appreciation Machine.  Howdy, MAM.

I wonder how the move to Gnome factored into the demise of Sun.  Premium machines, hard-core.  Costly.  Not for home use.  Not for Aunt Min.  Heck, its noises alone would give her the flapping vapors.  No, it's a top-drawer tool for serious professionals.  Yet there it is, glaring at me with the same unpolished face as some crappy netbook running Ubuntu.  Complete with StarOffice, seemingly identical to the open-source OpenOffice.

Both Sun and Apple, with OS X's NeXTSTEP-based innards, leveraged the open-source BSD UNIX as their foundations.  In Apple's case, the generic/open-source-y inner UNIX giblets are cloaked with a sublime and solid proprietary user interface with lots of unique and thoughtful goodies built in.  Nothing of the sort with the Sun, at least with Solaris 10u7.  Interface- and usability-wise, I see nothing here I couldn't get from Mandriva or Mint for free, today and maybe even back in 2001 when Sun first started edging towards Gnome, and certainly by 2007-2008 from whence this version of Solaris sprang.

Though it remained (and remains) well regarded in the server space, Sun summarily disappeared from desktop usage, and I wonder if Gnome was a symptom or a cause.

My thought: as in every business endeavor, differentiation is everything.  Whatever other problems Sun was battling in the market, it also lined up a chunk of its differentiation carefully in the cross-hairs and blew it away by adopting an open-source persona for its machines.

It seems my thoughts both parallel and oppose those of Scott McNealy from exactly a year ago.  On the one hand, the interviewer refers to the "open core" of Sun's products, which could have excluded the user interface.  And McNealy notes, "We probably got a little too aggressive near the end and probably open sourced too much and tried too hard to appease the community and tried too hard to share... You gotta strike a proper balance between sharing and building the community and then monetizing the work that you do... I think we got the donate part right, I don't think we got the monetize part right."

But he doesn't mention differentiation.  And if McNealy & Co. were prescient in stating that The Network Is The Computer, maybe they missed appreciating that The Interface Is The User Experience.  And on MAM, with Gnome, that's nothing special.

10 October 2011

What he left behind

This blog is supposed to be about nanotechnology, capitalism and innovation.  Almost all of my posts have related to the first or the last topics-- precious few have related to the one in the middle: capitalism.  And that's a shame, as capitalism bridges the two and has given us so much.

The past few days saw the passing of the most successful capitalist in generations.  His was a classic story: Steve Jobs started with nothing: a castoff child, a dropout.  His career took some wrong turns, but evidently he learned from them, and he built (and re-built) what is currently the most valuable company on Earth.

Somehow his passing seems to have affected people more than most executive deaths.  It's quite the phenomenon.  After all, Apple recently nudged Exxon into the #2 position, and who remembers the founders of Exxon?


I think it has to do with a sense of generational loss: In Jobs we saw the leader we wish our leaders were more like.  We saw the visionary who made dreams happen, who defined the fresh and graceful.  We saw the guiding big brother who showed us how it's done.

That's what's gone now, and folks are feeling it, though maybe without the words.  But it's there.  It's there in the shrines of flowers, the candles, the yellow-stickied goodbyes, and the apples-with-a-bite-missing arrayed in front of Apple's retail stores.  It's there in the paeans of pundits around the world, the front-page photos, the presidential condolences and the online appreciations.  It's there.

Remarkable.

Jobs had a lot to say about death in his Stanford commencement speech, which I urge you to watch.  He was one of the few captains of industry fearless enough to philosophize and talented enough not to make a laughingstock of himself doing it.  

There's something that lives on, then, and it's much more valuable than all the AAPL shares in the world: like a good big brother, sadly departed, he left behind inspiration.  And inspiration is where advancement starts under our system.  It underlies the willingness to risk, the drive to succeed, to make something of nothing.  It underlies entrepreneurship.  Defines it.  Defines ambition, animates hope.

We need more of that.  Its loss is what makes us weep.

One thing that was unique about Jobs was the breadth of his accomplishments.  Most historic entrepreneurs are content to build one business or one industry.  To have built and revolutionized so much across so many fields takes a special kind of gift, and not just as an inventor: as a team-builder, a network-spanner, a persuader and communicator and negotiator and all those other things.  Consider:
  • The personal computer (Apple II)
  • The personal computer, again (Macintosh)
  • The laptop (PowerBook)
  • The personal computer, a third time (NeXT)
  • Animated feature films (Pixar)
  • Personal media players (iPod)
  • Music distribution (iTunes is now the world's largest media store)
  • Software distribution (the App Store)
  • Computer retailing (Apple Stores)
  • Personal computers, iteration 3.5 (Mac OS X, the commercial second coming of NeXT)
  • Cell phones (iPhone)
  • Personal computers, round 4 (iPad)...
...And that's just off the top of my head, and it excludes the monumental turn-around of Apple, Inc. itself, which twitched and gasped in extremis after his twelve years wandering in the desert, its worth less than its cash in the bank.

In none of these was Jobs the first mover, but he brought a unique business approach to bear.  For example, Apple did not invent the mouse-- that original implement for driving a computer's graphical user interface.  Jobs licensed what he needed from Xerox, paying with Apple stock.  Xerox's mouse cost hundreds and hundreds of dollars.  Apple's retailed for $29.

It's too pat to sniff, "Well, he didn't invent it, he just commercialized it."  Exactly!  That's the very essence of meaningful innovation.  Edison didn't invent the first light bulb, either, as Joseph Swan demonstrated in court.  But until Edison came along, the light bulb didn't happen.  Edison made it happen by nailing the details and building the teams and establishing the networks that made it affordable, made it reliable, made it marketable, made it supportable by the necessary infrastructure and ecosystem, made it comprehensible and accessible to the common man... 

See the difference?  It's the difference between the inventor and the entrepreneur.  

Then look at the great entrepreneurs, the Fords and the Perots and the Hewletts and Packards and Varian brothers and the Larry Ellisons and Michael Dells and Bill Gates and... well, the list goes on and on, and they all did basically one or perhaps two startups, or shepherded one or perhaps two revolutions, or built or flipped maybe one or two technologies, bless 'em all.  The likes of Steve Jobs, on the other hand, come once a generation.  Besides Edison, only Howard Hughes comes to mind as a serial entrepreneur of such diverse accomplishments in the past century.  They're exceedingly rare.

Of course, there are detractors.  Haters, even.  Jobs was a tyrant, some will tell you, a prick, brutal and dehumanizing to his targets.  Sure.  That's almost a given among historic entrepreneurs.  Edison was a prickly, mercurial, arrogant, claim-jumping son of a bitch.  Henry Ford wasn't Mr. Nice Guy either, and a raging anti-Semite on top of it.  Hughes was a psychological basket case who trusted no one.  Historic-class entrepreneurs are odd, even damaged, and pretty much uniformly not-nice, at least in their business arena.  There, niceness is not what it's about.  Gladiators tend not to be nice in the ring.

But that's not their entirety.  In Jobs' case, I never met him but have some mutual friends, and they're pretty ripped up.  He was, after all, rather young, with kids still at home and a lovely, kind wife he adored.  By all accounts (and see this charming Quora thread for some of them) he had a sort of quiet, out-of-the-spotlight charity and was a devoted family man.  Personally, I'll miss him for his exemplification of so much that is uniquely American, for his unabashed philosophizing, and of course for his cool products.  But there are his family members and many friends who really mourn him, because Business Steve and Personal Steve were different people.  My heart goes out to them in their loss.

Personally, I believe our Creator gave each of us unique gifts.  And just as we are disappointed when things we give to others sit on a shelf unappreciated, I believe we disappoint God when we fail to use our gifts to their utmost.  Jobs and I are of different faiths, but I think we'd agree on that.  Certainly we disappoint ourselves, down deep, when we play it safe, notch it down, avoid risk and otherwise keep those gifts in their wrappers.  But rarely, we encounter a person who uses his gifts to the hilt, each and every day.  Such a person leaves this world with fewer regrets than most of us, I suspect.  And though their passing pains us most, it should grieve us least, because their example--their inspiration--will feed the next generation of their kind.

Like a guiding big brother, Steve Jobs showed us how it's done.  And the world--and the future--is a better place for it.

22 February 2010

Fabricating transistors without those pesky junctions and dopants

Now this looks interesting. EETimes relates some fascinating research at the Tyndall National Institute in Cork, Ireland: The current flow in nanowires can be pinched off by a simple, conductive nanoscale surrounding structure. According to the EETimes article:

The breakthrough is based on the deployment of a control gate around a silicon wire that measures just a few dozen atoms in diameter. The gate can be used to squeeze the electron channel to nothing without the use of junctions or doping. The development, which could simplify manufacturing of transistors at around the 10-nanometer stage, was created by a team led by Professor Jean-Pierre Colinge and a paper on the development has been published in Nature Nanotechnology.
It simplifies the production of transistors which also have a near-ideal sub-threshold slope, extremely low leakage currents and less degradation of mobility with gate voltage and temperature than classical transistors, the researchers have claimed. Nonetheless such devices can be made to have CMOS compatibility...
"We have designed and fabricated the world s first junctionless transistor that significantly reduces power consumption and greatly simplifies the fabrication process of silicon chips," declared Tyndall's Professor Colinge...

Between these devices, graphene technology and memristors, it would seem that a whole new chapter in integrated-circuit fabrication is in store for the next few years. Somewhere, Gordon Moore smiles.

20 February 2010

Billiard photonics

In another fascinating post to Ars Technica, Chris Lee discusses the leveraging of scattering to improve resolution in a microscope. A more counterintuitive thing is hard to imagine, but he explains...

Scattered photons can make for an improved focus

...a few years ago I reported on a very cool experiment, one that allowed researchers to get a nice focusable beam of light through a scattering medium, such as a sugar cube. This work has been continuing, and there are some technical differences in how the experiment works, but the concepts are still fundamentally the same.
Laser light is shone on a scattering sample, and a tiny fraction of this leaks through, but goes in every direction. To improve the transmission, the researchers place a liquid crystal matrix between the laser and the sample. They modulate the settings on each pixel of the liquid crystal, varying the amount of light transmitted and the effective thickness of each pixel.
Before the liquid crystal, the light beam is a nice smooth thing: the crests, called phase-fronts, of the electromagnetic waves form smooth curves across the profile of the laser beam, as does the intensity of the light. After the liquid crystal, the beam is a complete mess, with the phase-fronts forming some jagged pattern.
It just so happens that the phase-fronts can be chosen so that they exactly compensate for the presence of the scattering sample. This allows some small fraction of the light to pass unhindered through the sample, as if neither the liquid crystal nor the sample were there.
Unfortunately, you never really know in advance what the phase-fronts need to look like in order to compensate for the scattering. So you place a CCD camera just after the sample, and then adjust the LCD pixels until you get as bright a dot as possible on the camera sensor. It turns out that this is easy—you just need to take each liquid crystal pixel and adjust it until maximum brightness is achieved. No iteration is needed.
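That pixel-at-a-time recipe is simple enough to sketch in Python.  Below, the scattering sample is modeled as a set of random complex transmission coefficients and each liquid-crystal pixel contributes an adjustable phase; the matrix model, the pixel count and the 16 trial phases are all illustrative assumptions, not the experimenters' code.

import numpy as np

rng = np.random.default_rng(42)

n_pixels = 256   # liquid-crystal (SLM) pixels between the laser and the sample

# Random complex transmission coefficients stand in for the scattering sample:
# light from each pixel reaches the chosen camera pixel with a random amplitude and phase.
t = (rng.standard_normal(n_pixels) + 1j * rng.standard_normal(n_pixels)) / np.sqrt(2 * n_pixels)

phases = np.zeros(n_pixels)   # the phase applied by each liquid-crystal pixel

def spot_intensity(ph):
    """Brightness at the target camera pixel: the fields from all pixels interfere."""
    return abs(np.sum(t * np.exp(1j * ph))) ** 2

print("before optimization:", spot_intensity(phases))

# One pass over the pixels, no iteration: for each pixel, try a handful of phase
# settings and keep whichever one makes the camera spot brightest.
trial_phases = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
for k in range(n_pixels):
    scores = []
    for p in trial_phases:
        trial = phases.copy()
        trial[k] = p
        scores.append(spot_intensity(trial))
    phases[k] = trial_phases[int(np.argmax(scores))]

print("after optimization: ", spot_intensity(phases))

The brightness of the spot jumps by a large factor after the single pass, which is the essence of why, as Lee says, no iteration is needed: each pixel's best phase simply lines its contribution up with the sum of all the others.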


(...Seems to me that if the characteristics of the scattering medium could be more predictable and consistent than the sugar-cube example, the diffraction pattern could be pre-determined. If so, instead of an LCD array, a calculated or even printed pattern could be used. Perhaps the whole optical train could be diffractive, basically a modified zone-plate configuration. --S.J.)



This approach is quite flexible, because it turns out that you can turn the scattering sample into a lens. Just adjust the liquid crystal pixels until the smallest, brightest dot possible turns up, and you have a lens with a focal distance equal to that of the distance between the sample and the camera.
You can also use this technique to improve the resolution of an imaging system, a technique called structured illumination microscopy. Basically, you distort the phase-fronts so that you get multiple sharp points or lines of focus. Each point of focus is, at best, a factor of two better than achievable with ordinary light. But, a factor of two is better than a poke in the eye with a sharp stick, so we'll take it.
What the researchers in the Nature Photonics paper report is an example of structured illumination microscopy, but they claim a factor of ten improvement over the diffraction limit....


While Lee ultimately figures that a factor-of-two is a likelier improvement for resolution, it seems there may be further seductiveness to the technique. Thinking wildly, perhaps objects embedded in scattering media might be observable using some offspring of this research. That could enable applications ranging from oceanic imaging and sensing to in vivo biological imaging. And what does it say about whether scattering is as randomizing as is ordinarily assumed? Are there quantum-entanglement and cryptographic consequences? I've often wondered if we humans have the feeblest grasp of that thing called "randomness"... what we think is random usually isn't at some level, and our proudest efforts to generate randomness without recourse to natural phenomena are really pretty feeble. I wonder if this research may lead to further humbling in that way.

Gold finger: photonics galore

Chris Lee of Ars Technica provides an insightful analysis of a significant new capability in photonics. What impresses me most about this is how well the stage is set for further development and then industrial implementation through recombination with rapidly emerging industrial technologies. I've clipped a summary below and indicated one such synergy, but you should read the whole thing.

Making an optical switch by drawing lines in gold

...Normally, metals make for horrible nonlinear optics—in part because metals fail at transparency—but, they do have one advantage: lots of free electrons. If you shine light on the metallic surface in just the right way, then the electrons start to respond to the light field by oscillating in sympathy. This oscillation moves along the surface of the metal as something called a surface plasmon polariton—which is jargon for electrons that set up an oscillation that maintains a spatial orientation on the surface of the metal.

These plasmons travel at a much slower speed than light, have a much shorter wavelength, and are confined to the metal's surface. As a result, the electric fields associated with the charge oscillation are quite intense—intense enough, in fact, to drive nonlinear interactions. As a result, metals provide light with quite a good medium for things like four-wave mixing, provided you can get the light into the metal. Plasmons are made for this job because they are essentially light waves traveling along the surface of a metal.
[The researchers] ruled lines on the metal that were about 100nm in width...

(...a barn-door for nanoimprint lithography --S.J.)

and separated by around 300nm (center to center). These lines act to slow down the waves, with the delay depending on how close the plasmon wavelength is to the separation of the lines. This also controls the direction of the emission. Light only exits the structure at the point where the emission from each individual line is in phase with the rest, which depends on the spacing of the lines. But, not every color can find a spacing for which this occurs. In this case, the second of the two emitted waves can't find an emission angle that works for it at all, so the device emits only a single wavelength of light.
So, the end result is that, by ruling lines on the gold surface, you can choose which of the two colors you want generated and which direction it's emitted in. As an added bonus, the lines provide sharp points on the surface, which accumulate charge, resulting in very high electric fields (think of a lightning rod on a building). As a result, the four-wave mixing process becomes more efficient.
What's next? That's hard to say. I know that there are some ideas about how these nonlinear optical processes can be made more efficient, and maybe even useful, using plasmonic surfaces. So we may see some plasmonic optical switching devices. The big selling point in plasmonics is usually sensing, though, so things may go in that direction.
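My gloss on why the line spacing picks the color and the exit direction: coupling a surface plasmon of wavevector k_spp out into free space requires the standard grating condition, k_spp = (2π/λ0)·sinθ + m·(2π/d) for some integer m, where d is the ruling period (about 300 nm here) and λ0 the free-space wavelength of the emitted light.  A given d only offers solutions for certain wavelength-and-angle pairs, which is why one of the two mixing products simply cannot find an exit angle and stays put.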

One thing that intrigues about this approach is how it leverages and manages those surface plasmon waves.  Note the point Lee mentions about the plasmon waves' shorter-than-light wavelengths, necessitated by their comparatively slow velocity.  Their short wavelengths would seem to be a useful property for probing and sensing phenomena and physics on the nanoscale, perhaps providing a new tool to bridge the region between light-based microscopies and electron microscopies, which are limited by the electrons' de Broglie wavelength (on the order of 10^-12 m at typical accelerating voltages) in the same way that optics are diffraction-limited by photons' wavelengths.  In particular, materials with even slower surface plasmon velocities but still generous free electron populations (which I'd imagine equates to "conductivity," though I'm rusty) would have shorter plasmon wavelengths still.  Depending on how far down the process can be driven, things could get really interesting, and really weird.

Single. Best. Customer. Experience. Ever.

Here's a quick note of appreciation to Apple for their extraordinary effort to rectify a minor but recurring annoyance with my beloved, well-traveled, hard-working original-issue MacBook Pro. The details are too long and boring to post, but the elevator summary is: I went to the Los Gatos Apple Store for an appointment to visit the "Genius Bar" with a software question. This is always a treat-- imagine talking face-to-face with folks who actually know their products and can communicate effectively, rather than spending hours on hold to Bangalore waiting for an incomprehensible script-reading troll to not solve your problem. Customer service: what a concept. While there, I almost off-handedly mentioned a hardware issue with the machine, which the justly-termed Genius, Trevor, recalled addressing before when my machine was under warranty. He encouraged me to call AppleCare, although my extended warranty had expired last Summer. Within an hour, Apple had spun up an amazing effort to get to the bottom of the issue. Noel at AppleCare could not have taken better care of me and the resolution could not have been more perfect.

It got me thinking. I first used Macs early in my career to run the first versions of LabVIEW, back in my days at Newport Corporation in 1986 when the Mac was the only GUI machine in town. We accomplished some amazing work on those groundbreaking machines, including:
  • Devising an easy-to-use quality-test workstation which made 100% graphical, six-degree-of-freedom interferometry a tool that any production-line assembler could use. This precipitated an avalanche of assembly tweaks and quality improvements from curious assemblers eager to apply their weekend shade-tree-mechanic skills. All these years later, I remain awed and grateful for their gumption and creativity, and at the role LabVIEW and the Mac played in enabling it.

  • Turning around the company's motion-control business segment and turbocharging its instrumentation product lines as the industry's first adopter of National Instruments' brilliant instrument-library initiative for LabVIEW. To show this off, we bravely built a simple virtual instrument from our company's optical hardware and put it on the trade-show floor of the next major conference. It was mobbed. I recall standing in the back of the booth with my boss, the much-missed Dean Hodges, and observing a stunning phenomenon: customers over 35 were looking at the optical hardware, while customers under 35 were looking at the Mac. In a stroke, the Mac had helped us open a highly differentiating dialog with the up-and-coming Young Turks of the laser/electro-optic industry. (An exercise for the user: what would generate a similar effect for your company today?)

  • Building a thriving systems business on technical innovations that LabVIEW on the Mac let us explore quickly and with little risk. The keystone was my work on the first digital gradient search algorithm for nanoscale alignments of optical fibers, waveguides, etc. From idea to first demonstration of that took only a few hours thanks to LabVIEW's dataflow programming paradigm which the Mac's GUI enabled. Eventually we received the very first patent for a LabVIEW-based virtual instrument, and a multi-million-dollar business grew from that seed.
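That kind of search is easy to describe: step the positioner, watch the coupled optical power, keep whatever direction heads uphill, and shrink the step as the peak nears.  A minimal hill-climbing illustration of the idea in Python follows; it is a sketch, not the patented algorithm, and the Gaussian coupling model and step sizes are assumptions.

import math

def coupled_power(x, y):
    """Stand-in for the optical power meter: peak coupling at (3.2, -1.7) microns."""
    return math.exp(-((x - 3.2) ** 2 + (y + 1.7) ** 2) / 8.0)

def align(x=0.0, y=0.0, step=1.0, shrink=0.5, tol=0.01):
    """Climb toward maximum coupled power, refining the step size near the peak."""
    while step > tol:
        improved = False
        for dx, dy in ((step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)):
            if coupled_power(x + dx, y + dy) > coupled_power(x, y):
                x, y = x + dx, y + dy
                improved = True
        if not improved:
            step *= shrink   # no neighboring move helps, so search more finely
    return x, y

print(align())   # converges near the (3.2, -1.7) optimum

The real thing ran against a live power meter and real positioning stages rather than a tidy Gaussian, of course, but the uphill-stepping heart of the idea is the same.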
It was an enthralling time, and I was even honored to be the subject of a photo interview on "Science and the Mac" in Macworld magazine. But eventually, Microsoft came out with Windows, and by its version 3.1 there was a version of LabVIEW for it. While my family kept using Macs at home, the world went Windows. And soon we saw how monocultures are a bad thing, both for customers (who are deprived of competition-driven advancements) and for security (having a single, badly-designed lock to pick makes the job easy for the bad guys).

Today, Apple and the Mac are surging again, propelled by superb, imaginative products that are meaningfully differentiated by great performance, compelling design, unmatched solidity and a high-end focus. And please add nonpareil support to that, thanks to a corporate culture that grows Trevors and Noels.

And here, in a ZDNet blog story by David Morgenstern, is terrific news: for scientific and engineering fields, the Mac's story is coming full circle:

Engineering: The Mac is coming back



Most attendees at the Macworld Expo in San Francisco this week — distracted by plentiful iPhone apps, whispered tales of the forthcoming Apple iPad, and the sight of dancing booth workers with their faces covered by unfortunate costumes of gigantic Microsoft Office for Mac icons — may have overlooked a trend: The Macintosh is back in the engineering segment.
Engineering, which was often lumped into the beat called “SciTech,” once was a strong segment for the Macintosh. Then in the early 1990s, the platform’s position was weakened and then lost. But now the Mac appears poised for a strong return.
...

“Engineering is primed to take off now [on the Mac],” said Darrin McKinnis, vice president for sales and marketing at CEI of Apex, NC. He said there was a “growing ecosystem of applications” to support Mac engineers and while previously, many engineers purchased Mac hardware to then run Linux applications or even Windows programs in virtualization, his company had seen increasing demand for a native Mac version.
McKinnis pointed to a number of engineering teams around the country that are now almost all working on Macs. With the native Mac apps, the loser will be Linux, he said.
McKinnis has a long (and painful) history with engineering solutions on the Mac. He was once an engineer at the NASA Johnson Space Center in Houston, where in 1995, CIO John Garman decided to eliminate “unnecessary diversity” and switch thousands of Mac workstations over to Windows 95.
The battle was joined between NASA’s directive at the time for “Better, Faster, Cheaper,” and what Garman dismissively called “Mac huggers” (a techno-word-play on the “tree huggers” environmentalist sobriquet). It didn’t help that Garman was mentioned in a Microsoft advertisement that thanked customers for their “contributions” to Windows 95.
NASA Mac users tried hard to point out that this policy would cause problems. My MacWEEK colleague Henry Norr wrote a series of articles about the fight to keep the Mac at NASA, which won a Computer Press Association award. Here’s a slice of his Feb. 12, 1996 front page story:
“Making me take a Pentium is like cutting off my right hand and sewing on a left hand,” said a Mac user at NASA’s Johnson Space Center in Houston who recently faced forced migration to Windows. “I’ll learn to use the left hand, but there’s no doubt my productivity is going to suffer, and I’m going to resent it.”
To this engineer and hundreds of other Mac users at the space center, such desktop amputations hardly seem like an effective way to comply with agency administrator Dan Goldin’s much-publicized motto, “Better, Faster, Cheaper.” To them, the space center’s new policy of standardizing on Windows is wasteful, unnecessary and infuriating, and they are not taking it lying down.
Eventually, the fight went to hearings at the Inspector General’s office. McKinnis was one of the staff who testified there. While the investigation concluded with a report that sided with the Mac users, the Mac was supplanted.

No more. The Mac's architectural advantages in performance, security, robustness and ease of use are attracting users snake-bit by the malware, misbehavior and cumbersomeness of Windows and the chaos and geek-intensiveness of the Linux world.

And then there's Trevor and Noel.

14 February 2010

[Yet more] Researchers make faster-than-silicon graphene chips

Egad. A few hours after posting the graphene news from IBM, I encounter this apparently parallel development:

Researchers make faster-than-silicon graphene chips
updated 06:35 pm EST, Wed February 3, 2010
Penn State finds method of making graphene chips

A carbon semiconductor called graphene could replace silicon in computer chips in the near future, researchers at Penn State found. They claim to have developed a way to put the graphene on 4-inch wafers. The Electro-Optics Center Materials Division scientists say their work can eventually lead to chips that are 100 to 1,000 times faster than silicon.



Graphene is a crystalline form of carbon that is made up of two-dimensional hexagonal arrays, which is ideal for electronic applications. Attempting to place the material onto sheets using the usual methods turns them into irregular graphite structures, however. David Snyder and Randy Cavalero at Penn State say they came up with a method called silicon sublimation that removes silicon from silicon carbide wafers and leaves pure graphene.

A similar process has been used for graphene before, but the EOC is the first group that claims it has perfected the process to a point that lets them produce 4-inch wafers. The smallest wafers using a more conventional method have resulted in 8-inch graphene wafers. Typical wafers used for processors today are roughly 11 inches across. [via EETimes]

A Quantum Leap in Battery Design?

A Quantum Leap in Battery Design
Digital quantum batteries could exceed lithium-ion performance by orders of magnitude.
A "digital quantum battery" concept proposed by a physicist at the University of Illinois at Urbana-Champaign could provide a dramatic boost in energy storage capacity--if it meets its theoretical potential once built.
The concept calls for billions of nanoscale capacitors and would rely on quantum effects--the weird phenomena that occur at atomic size scales--to boost energy storage. Conventional capacitors consist of one pair of macroscale conducting plates, or electrodes, separated by an insulating material. Applying a voltage creates an electric field in the insulating material, storing energy. But all such devices can only hold so much charge, beyond which arcing occurs between the electrodes, wasting the stored power.
If capacitors were instead built as nanoscale arrays--crucially, with electrodes spaced at about 10 nanometers (or 100 atoms) apart--quantum effects ought to suppress such arcing. For years researchers have recognized that nanoscale capacitors exhibit unusually large electric fields, suggesting that the tiny scale of the devices was responsible for preventing energy loss. But "people didn't realize that a large electric field means a large energy density, and could be used for energy storage that would far surpass anything we have today," says Alfred Hubler, the Illinois physicist and lead author of a paper outlining the concept, to be published in the journal Complexity.
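The reasoning reduces to the energy density stored in a capacitor's electric field, u = ½·ε0·E², which grows as the square of the field.  Whatever extra field strength a 10-nanometer vacuum gap can sustain before breakdown therefore pays off quadratically: tolerate ten times the field of a conventional dielectric and you store a hundred times the energy per unit volume.  For a sense of scale, an assumed field of 1 V/nm (10^9 V/m) works out to about 4 MJ per cubic meter.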


That's all quite interesting, and heaven knows the world needs to finally advance beyond battery technology that Volta could understand. Going directly to Hubler's paper yields the following explanation, spanning pgs. 3 and 4:

Nano plasma tubes are generally forward-biased and if the residual gas emits visible light, they can be used for flat-panel plasma lamps and flat panel monitors... The energy density in reverse-biased nano plasma tubes is small, because gas becomes a partially ionized, conducting plasma at comparatively small electric fields... In this paper, we investigate energy storage in arrays of reverse-biased nano vacuum tubes, which are similar in design to nano plasma tubes, but contain little or no gas... Since there are only residual gases between the electrodes in vacuum junctions, there is no Zener breakdown, no avalanche breakdown, and no material that could be ionized. Electrical breakdown is triggered by quantum mechanical tunneling of electrode material: electron field emission on the cathode and ion field emission on the anode. Because the energy barrier for electron field emission is large and the barrier for ion field emission even larger, the average energy density in reverse-biased nano vacuum tubes can exceed the energy density in solid state tunnel junctions and electrolytic capacitors. Since the inductance of the tubes is very small, the charge-discharge rates exceed batteries and conventional capacitors by orders of magnitude. Charging and discharging involves no faradaic reactions so the lifetime of nano vacuum tubes is virtually unlimited. The volumetric energy density is independent from the materials used as long as they can sustain the mechanical load, the electrodes are good conductors, and the mechanical supports are good insulators. Therefore, nano vacuum tubes can be built from environmentally friendly, non-noxious materials. Materials with a low density are preferable, since the gravimetric density is the ratio between the volumetric energy density and the average density of the electrodes and supports. Leakage currents are small, since the residual gases contain very few charged particles.
The thing is, I think something much like this has been tried, though not for energy storage: The technology Hubler describes seems very similar to the notion of field-emission displays and surface-conduction electron-emitter displays, two closely-related technologies in which nanoscale vacuum tubes are fabricated microlithographically and arrayed to stimulate phosphors.




One of Silicon Valley's largest failed ventures was Candescent, a company devoted to developing such displays, which burned through (IIRC) something like $600 million in funding from some stellar sources. As Daniel den Engelsen notes in his article, "The Temptation of Field Emission Displays,"
...Manufacturing of FEDs is too difficult, and thus too expensive; moreover, the recent success of LCDs and PDPs as Flat Panel Displays (FPDs) for TV is now discouraging (large) investments in FED manufacturing facilities. The two main challenges for designing and making FEDs, viz. high voltage breakdown and luminance non-uniformity, are described in this paper. Besides improvements in the field of emitter and spacer technology, a new architecture of FEDs, notably HOPFED, has been proposed recently to solve these two persistent hurdles for manufacturing FEDs.
But energy storage wouldn't care much about luminance non-uniformity, and Hubler seems to have determined that high-voltage breakdown is manageable in his configuration. Hubler and Canon, which acquired the ashes of Candescent from receivership, might want to talk. Sony, a major battery manufacturer as well as a former FED developer, might be another interested party.

Big Blue demos 100GHz chip


Time to rev up this blog thingie again. Lots is going on, including some developments that seem quite practical for commercialization in the not-too-distant future.

Consider, from The Register in England:

Big Blue demos 100GHz chip


IBM researchers have made a breakthrough in the development of ultra-high-speed transistor design, creating a 100GHz graphene-based wafer-scale device. And that's just for starters.
The transistor that the researchers have developed is a relatively large one, with a gate length of 240 nanometers - speeds should increase as the gate length shrinks.
The field-effect transistor that the IBM team developed exploits what a paper published in the journal Science understates as the "very high carrier mobilities" of graphene, a one-atom-thick sheet of carbon atoms grown on a silicon substrate.
This extraordinarily thin sheet is grown on the silicon epitaxially, meaning that it's created in an ordered crystalline structure on top of another crystalline structure - in this case, good ol' garden-variety silicon.
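The Register's aside that "speeds should increase as the gate length shrinks" is the generic scaling argument for any field-effect transistor: the cutoff frequency goes roughly as the carrier velocity divided by the gate length, fT ≈ v / (2π·Lg).  Graphene's "very high carrier mobilities" raise v; shrinking that 240-nanometer gate raises fT further.  (The relation is my gloss, not a figure from the Science paper.)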

I wrote about graphene back in the Summer of 2007, noting it seemed more tractable for utilization in manufacturing processes than its more-glamorous siblings, carbon nanotubes. And so it seems: the fact that IBM's development is based on "garden-variety silicon" is a wonderful testament to recombinant innovation and promises practical adoption before too long. It seems hackneyed and threadbare to haul out Moore's Law one more time, but here it is again, keeping pace.

16 July 2008

A guru on innovation takes note

My "Breakthrough Innovation" co-panelist, Patricia Seybold, has spotlighted my blog posts on HP's memristor and photonic interconnect developments.

Patty's attention is worthy of note, as she has a track record of pegging paradigm shifts in the technology world and guiding organizations to capitalize on them. Savvy strategists listen to her, and successful ones put her advice to work.

Most critical is her relentless focus on customers and her insistence on involving them early in the strategic and developmental process, where they are central to the ecosystem of stakeholders in a commercial endeavor. As she states in her blog's definition of Outside Innovation--both her mantra and the title of one of her books:

What is Outside Innovation?

It’s when customers lead the design of your business processes, products, services, and business models. It’s when customers roll up their sleeves to co-design their products and your business. It’s when customers attract other customers to build a vital customer-centric ecosystem around your products and services. The good news is that customer-led innovation is one of the most predictably successful innovation processes. The bad news is that many managers and executives don’t yet believe in it. Today, that’s their loss. Ultimately, it may be their downfall.

Exactly right. Too many businesspeople are happy to say, "The customer is always right" while missing the opportunity to harness customers' insights early in the strategic or design process, when those insights can exert their most profound leverage and produce the most striking competitive advantage. As she notes:

HOW DO YOU WIN IN INNOVATION?
You no longer win by having the smartest engineers and scientists; you win by having the smartest customers!

...And listening to them.

A corollary to that: seize every opportunity to advance the education--the "smartening"--of your customers, including by facilitating their learning from each other.

The same goes for the rest of your enterprise's ecosystem. Regardless of your business, one of the most valuable take-aways from HP's purposeful ecosystem-building was illuminated in their Photonics Interconnect Forum when Jamie Beckett quoted HP Labs Director Prith Banerjee: "Not all the smart people work at HP." If so, then ecosystem-building and customer-smartening are ways of multiplying the smart people who do work with them.

Call it a positive feedback mechanism for organizational IQ.

16 June 2008

"Fast bipolar nonvolatile switching," and why it changes everything


Yet more memristor news from HP Labs. An eye-blink after their stunning news of discovering the memristor's existence as the fourth fundamental passive electronic circuit element comes Nature Nanotech's advance online publication of significant progress in the practical fabrication of these devices.

Authors Yang, Pickett, Li, Ohlberg, Stewart and Williams--all of HP Labs in Palo Alto--state:

We have built micro- and nanoscale TiO2 junction devices with platinum electrodes that exhibit fast bipolar nonvolatile switching. We demonstrate that switching involves changes to the electronic barrier at the Pt/TiO2 interface due to the drift of positively charged oxygen vacancies under an applied electric field. Vacancy drift towards the interface creates conducting channels that shunt, or short-circuit, the electronic barrier to switch ON. The drift of vacancies away from the interface annihilates such channels, recovering the electronic barrier to switch OFF. Using this model we have built TiO2 crosspoints with engineered oxygen vacancy profiles that predictively control the switching polarity and conductance.

The keywords are "engineered oxygen vacancy profiles" and "predictively control." These indicate that memristors are hurtling from their emergence as a laboratory success-story to land square in the everyday IC design toolkit, right before our eyes. Remarkable!

More remarkable is the unusual behavior of these devices. First, there's no doubt that they'll be used as replacements for flash RAM and hard disks: they're faster, they use less energy, they would appear to be denser (50x50nm, in the versions discussed in Nature Nanotech), they operate in parallel, and scalability seems to be a real strength. Given the world's exponentiating appetite for information and information storage, this is all fine news for iPhones and laptops and other good things. But memristors' capability is not limited to storing 1's and 0's. That would be so 21st century. No, memristors can store values in-between. They are analog devices, and they will facilitate parallel analog computers. Those are common enough: you have one between your ears.
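
To make the "in-between values" concrete, here is a minimal simulation sketch of the kind of linear ionic-drift picture described above: a TiO2 film whose doped-region width w drifts as charge flows, so the device's resistance slides continuously between R_ON and R_OFF rather than snapping between two states. The parameter values are illustrative placeholders, not HP's measurements, and the model is the simplest one that captures the behavior.

import math

# Linear-drift memristor toy model; all values below are illustrative, not HP's.
D     = 10e-9        # film thickness, m
R_ON  = 100.0        # resistance with the film fully doped, ohms
R_OFF = 16000.0      # resistance with the film fully undoped, ohms
MU_V  = 1e-14        # dopant (oxygen vacancy) mobility, m^2 / (V*s)
w     = 0.5 * D      # doped-region width: an analog state variable

dt, steps = 1e-4, 20000              # simulate 2 seconds
freq, amplitude = 1.0, 1.0           # 1 Hz, 1 V sine drive

for n in range(steps):
    t = n * dt
    v = amplitude * math.sin(2 * math.pi * freq * t)
    m = R_ON * (w / D) + R_OFF * (1 - w / D)   # memristance: anywhere between R_ON and R_OFF
    i = v / m
    w += MU_V * (R_ON / D) * i * dt            # the state drifts with the charge that has passed
    w = min(max(w, 0.0), D)                    # clamp at the physical boundaries
    if n % 2000 == 0:
        print(f"t={t:4.1f}s  v={v:+5.2f}V  M={m:8.1f} ohm  w/D={w/D:4.2f}")

Run it and the printed memristance passes through intermediate values; and because the state only moves while current flows, whatever value it holds when the drive stops is retained. That nonvolatile analog state is the whole point.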

HP Labs' Jamie Beckett spoke with the researchers:
"A conventional device has just 0 and 1 – two states – this can be 0.2 or 0.5 or 0.9," says Yang. That in-between quality is what gives the memristor its potential for brain-like information processing... Any learning a computer displays today is the result of software. What we're talking about is the computer itself – the hardware – being able to learn."

Beckett elaborates,
...[Such a computer] could gain pattern-matching abilities [or could] adapt its user interface based on how you use it. These same abilities make it ideal for such artificial intelligence applications as recognizing faces or understanding speech.

Another benefit would be that such an architecture could be inherently self-optimizing. Presented with a repetitive task, or one requiring parallel processing, such a computer could be designed to route subtasks internally in increasingly efficient ways or using increasingly efficient algorithms. Tired of PCs that seem slower after a few months' use than when they were new? This computer would do just the opposite.

Beckett continues:
"When John Von Neumann first proposed computing machines 60 years ago, he proposed they function the way the brain does," says Stewart. "That would have meant analog parallel computing, but it was impossible to build at that time. Instead, we got digital serial computers."
Now it may be possible to build large-scale analog parallel computing machines, he says. "The advantage of those machines is that intrinsically they can do more learning."
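
To put a concrete face on "analog parallel computing," consider the standard crossbar picture (my toy sketch in NumPy, not HP's design, and the numbers are invented): conductances at each crosspoint act as analog weights, so applying a vector of voltages to the rows yields the whole vector-matrix product at once as column currents, and "learning" amounts to nudging those conductances.

import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(0.1, 1.0, size=(4, 3))   # crosspoint conductances: analog weights, not bits
v = np.array([0.2, 0.0, 0.8, 0.5])       # voltages applied to the rows

i_out = v @ G                            # column currents: a parallel analog multiply-accumulate
print("column currents:", i_out)

# Error-driven nudge (delta-rule flavor): adjust each crosspoint in proportion to
# its row's input and the error on its column, then keep conductances physical.
target = np.array([0.0, 1.0, 0.0])
error = target - i_out / i_out.max()
G = np.clip(G + 0.1 * np.outer(v, error), 0.1, 1.0)

The multiply-accumulate happens "for free" in the physics of the array rather than one step at a time through a digital ALU, which is what makes Stewart's "large-scale analog parallel computing machines" so tantalizing.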

Not science fiction. Science fact. And soon--sooner than any of us imagined--it will be up to the engineers and entrepreneurs to leverage this in fantastical ways, using memristors as routinely as they do transistors, invented just sixty years ago.


09 June 2008

Putting the "Lab" back in "LabVIEW" -- while spreading LabVIEW out of the lab


As my panel on "Breakthrough Innovation" at NI Week last year discussed, National Instruments and its LabVIEW programming environment are both an important part of the nanotechnology ecosystem and a fine case study of the commercial benefits of ecosystem-building and customer-driven, recombinant innovation.

So some media coverage of National Instruments caught my eye recently, such as:

National Instruments Announces Global Multi-core Programming Workshop

National Instruments (Nasdaq: NATI) today announced an initiative sponsored by Intel Corporation to deliver free, hands-on multi-core programming workshops based on the NI LabVIEW graphical programming language to engineers and scientists around the globe. The Multi-core Programming with NI LabVIEW Hands-On Workshop will be presented in 18 U.S. and Canadian cities beginning in May and 15 international cities this fall...

and


EETimes: LabView poised for parallel role


James Truchard thinks he may have one of the keys to the multicore era.
The chief executive of National Instruments believes the company's flagship LabView environment offers many of the tools parallel programmers need today. NI has plugged into parallel research efforts at Intel Corp. and Berkeley to make sure LabView evolves with the times...
LabView can automate the assignment of jobs to different cores and threads. It also can generate C code and offers tools to manually optimize CPU loading, debugging and the use of cache in multicore chips...
"This is going to be pretty painful for people," said Truchard. "Today's programming languages really weren't developed for parallel programming" and the new mechanisms Intel and others are developing to plug the holes "add a lot of complexity to the programming environment," he said...
Truchard notes that high school students in the annual First Robotics contest used LabView to program FPGAs. Last year NI set up a lab at Berkeley so students there could use its tools to prototype embedded system designs.

(How about that. High school and college students making their own silicon. Wow.)

The program is one of several at NI aimed at keeping LabView in the forefront of parallel programming research, currently one of the hottest topics in computer science, thanks to the move to multicore processors...

As you can deduce, there has been angst among analysts and engineers alike about the scarcity of software architectures supporting multiprocessing and parallelism.

Hardware has gotten ahead of software. But NI has had parallelism nailed in the architecture of LabVIEW for more than two decades, at least for scientific instrumentation and process automation. With the introduction of LabVIEW FPGA three years ago and LabVIEW 8.5 for multicore processors last year, support for true parallelism became not only possible but easy.
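
The reason it's easy is LabVIEW's dataflow model: two nodes on the block diagram with no wire between them are free to execute concurrently, and a downstream node fires only when all of its inputs have arrived--no thread management by the programmer. LabVIEW is graphical, so there's nothing textual to quote, but as a rough analogy (my sketch in Python, emphatically not NI's code) the same idea looks like this:

from concurrent.futures import ProcessPoolExecutor

def acquire_channel_a():
    # stand-in for an independent upstream node
    return sum(x * x for x in range(1_000_000))

def acquire_channel_b():
    # a second node with no data dependency on the first
    return sum(3 * x for x in range(1_000_000))

def combine(a, b):
    # downstream node: can only fire once both input "wires" carry values
    return a + b

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        fut_a = pool.submit(acquire_channel_a)   # both independent nodes dispatched at once
        fut_b = pool.submit(acquire_channel_b)   # the runtime spreads them across cores
        print(combine(fut_a.result(), fut_b.result()))

In the graphical environment even that much ceremony disappears; the parallelism is simply implied by the topology of the diagram.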

Now comes this "initiative sponsored by Intel."

The thing that raised my antennas is that the coverage underplays LabVIEW's traditional market focus on scientific instrumentation and process automation.

Is NI staging a breakout into general computing? Might LabVIEW be the new C++? Is NI taking the "Lab" out of "LabVIEW"?

Technically, there's no reason it couldn't happen. It is a fully-fledged programming environment.

If, say, The MathWorks issued a press release about Matlab, touting sponsorship by Intel, extolling multiprocessing support, commencing a worldwide tour that spotlights the ease with which parallelism can be leveraged and managed on its platform... and avoiding mention of its foundational numerical capabilities... then we might all agree that would be news. And so is this.

So. Can they take LabVIEW out of the lab? Is that what this is about, even? Maybe I'm just running a fever, but that's how it hit me.

If I were to venture a suggestion, I'd advise that NI consider seeding the programming world with a version of LabVIEW that omits its instrumentation and heavy-duty numerical modules. Maybe even a free one, or one targeted at students. Include its current facilities for integrating traditional code, and watch what happens when folks realize how straightforward programming for multiprocessing can be.

Arguing against the notion is the dictum from page 102 of Scott's Big Book of Pithy Pronouncements: "The Great Unfunded Liability of technology is: Support." On the other hand, NI's experience with Lego Robotics might suggest an affordable upper bound for the incremental support-cost obligation.

The time is right for Dr. T to stake his claim as the Moses of Multiprocessing. He already has the stone tablets. (He got 'em from Jeff Kodosky.)


Happily, research customers can revel in another new initiative at NI, aimed at strengthening its traditional ties to the academic and research marketplaces. (After all, another dictum is, "Dance with the girl that brought ya.") I don't know all the details yet, but what I've heard is exciting for researchers at colleges and universities. It includes an NI Week Academic Forum, a focused day of presentations and discussions for research customers and industry partners, which extends NI Week one day forward. More at http://www.ni.com/niweek/academic.htm --and it appears you'll be hearing more about the broader aspects of this new outreach initiative in the coming weeks.




P.S. I learned today that my "Breakthrough Innovations" co-panelist, Dr. Andrew Hargadon, Director of the Center for Entrepreneurship and the Energy Efficiency Center at the University of California, Davis, will present the closing keynote address at this summer's NI Week. Excellent!

----------

UPDATE: 2 July 2008: The Mystery of the Exploding Hit-Meter has been solved, thanks to a kindly email from NI's Vincent Carpentier. Seems this post is linked in NI News for July '08. The whole newsletter is well worth visiting-- lots of exciting developments and interesting applications, with webcasts and other resources of interest to both LabVIEW users and just-plain-folks interested in the latest in computing and instrumentation.