This question arose from comments [1] about different kinds of progress in computing over the last 50 years or so.
I was asked by some of the other participants to raise it as a question to the whole forum.
The basic idea here is not to bash the current state of things but to try to understand something about the progress of coming up with fundamental new ideas and principles.
I claim that we need really new ideas in most areas of computing, and I would like to know of any important and powerful ones that have been done recently. If we can't really find them, then we should ask "Why?" and "What should we be doing?"
The Internet itself pre-dates 1980, but the World Wide Web ("distributed hypertext via simple mechanisms") as proposed and implemented by Tim Berners-Lee started in 1989/90.
While the idea of hypertext had existed before ( Nelson’s Xanadu [1] had tried to implement a distributed scheme), the WWW was a new approach for implementing a distributed hypertext system. Berners-Lee combined a simple client-server protocol, markup language, and addressing scheme in a way that was powerful and easy to implement.
I think most innovations are created in re-combining existing pieces in an original way. Each of the pieces of the WWW had existed in some form before, but the combination was obvious only in hindsight.
And I know for sure that you are using it right now.
[1] http://www.nytimes.com/2009/01/11/business/11stream.html

Free Software Foundation [1] (Established 1985)
Even if you aren't a wholehearted supporter of their philosophy, the ideas they have been pushing, free software and open source, have had an amazing influence on the software industry and on content in general (e.g. Wikipedia).
[1] http://en.wikipedia.org/wiki/Free_Software_Foundation

I think it's fair to say that in 1980, if you were using a computer, you were either getting paid for it or you were a geek... so what's changed?
Printers and consumer-level desktop publishing. Meant you didn't need a printing press to make high-volume, high-quality printed material. That was big - of course, nowadays we completely take it for granted, and mostly we don't even bother with the printing part because everyone's online anyway.
Colour. Seriously. Colour screens made a huge difference to non-geeks' perception of games & applications. Suddenly games seemed less like hard work and more like watching TV, which opened the doors for Sega, Nintendo, Atari et al to bring consumer gaming into the home.
Media compression (MP3s and video files). And a whole bunch of things - like TiVO and iPods - that we don't really think of as computers any more because they're so ubiquitous and so user-friendly. But they are.
The common thread here, I think, is stuff that was once impossible (making printed documents; reproducing colour images accurately; sending messages around the world in real time; distributing audio and video material), and was then expensive because of the equipment and logistics involved, and is now consumer-level. So - what are big corporates doing now that used to be impossible but might be cool if we can work out how to do it small & cheap?
Anything that still involves physical transportation is interesting to look at. Video conferencing hasn't replaced real meetings (yet) - but with the right technology, it still might. Some recreational travel could be eliminated by a full-sensory immersive environment - home cinema is a trivial example; another is the "virtual golf course" in an office building in Soho, where you play 18 holes of real golf on a simulated course.
For me, though, the next really big thing is going to be fabrication. Making things. Spoons and guitars and chairs and clothing and cars and tiles and stuff. Things that still rely on a manufacturing and distribution infrastructure. I don't have to go to a store to buy a movie or an album any more - how long until I don't have to go to the store for clothing and kitchenware?
Sure, there are interesting developments going on with OLED displays and GPS and mobile broadband and IoC containers and scripting and "the cloud" - but it's all still just new-fangled ways of putting pictures on a screen. I can print my own photos and write my own web pages, but I want to be able to fabricate a linen basket that fits exactly into that nook beside my desk, and a mounting bracket for sticking my guitar FX unit to my desk, and something for clipping my cellphone to my bike handlebars.
Not programming related? No... but in 1980, neither was sound production. Or video distribution. Or sending messages to your relatives in Zambia. Think big, people... :)
Package management and distributed revision control.
These patterns in the way software is developed and distributed are quite recent, and are still just beginning to make an impact.
Ian Murdock has called package management [1] "the single biggest advancement Linux has brought to the industry". Well, he would, but he has a point. The way software is installed has changed significantly since 1980, but most computer users still haven't experienced this change.
Joel and Jeff have been talking about revision control (or version control, or source control) with Eric Sink [2] in Podcast #36 [3]. It seems most developers haven't yet caught up with centralized systems, and DVCS is widely seen as mysterious and unnecessary.
From the Podcast 36 transcript [4]:
[1] http://en.wikipedia.org/wiki/Package_management_system#Impact

0:06:37
Atwood: ... If you assume -- and this is a big assumption -- that most developers have kinda sorta mastered fundamental source control -- which I find not to be true, frankly...
Spolsky: No. Most of them, even if they have, it's the check-in, check-out that they understand, but branching and merging -- that confuses the heck out of them.
With distributed version control, the distributed part is actually not the most interesting part.
- Benjamin Crouzier
BitTorrent [1]. It completely turns what previously seemed like an obviously immutable rule on its head - the time it takes for a single person to download a file over the Internet grows in proportion to the number of people downloading it. It also addresses the flaws of previous peer-to-peer solutions, particularly around 'leeching', in a way that is organic to the solution itself.
BitTorrent elegantly turns what is normally a disadvantage - many users trying to download a single file simultaneously - into an advantage, distributing the file geographically as a natural part of the download process. Its strategy for optimizing the use of bandwidth between two peers discourages leeching as a side-effect - it is in the best interest of all participants to enforce throttling.
It is one of those ideas which, once someone else invents it, seems simple, if not obvious.
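To make the bandwidth argument concrete, here is a back-of-the-envelope sketch (in Python, with made-up numbers, not the actual protocol): a lone server's upload link is divided among all downloaders, while in a swarm every extra peer also adds upload capacity.

```
# Back-of-the-envelope sketch (not the real protocol): compare the time to get
# one file to N peers from a single server vs. a swarm where peers also upload.
# All numbers below are invented for illustration.

FILE_MB = 700            # hypothetical file size
SERVER_UP_MBPS = 100     # hypothetical server upload capacity
PEER_UP_MBPS = 1         # hypothetical per-peer upload capacity

def client_server_time(n_peers):
    # The server's upload link is shared by everyone, so time grows with N.
    return n_peers * FILE_MB * 8 / SERVER_UP_MBPS

def swarm_time(n_peers):
    # In a swarm, aggregate upload capacity grows with the number of peers,
    # so total time stays roughly flat instead of growing linearly.
    total_up = SERVER_UP_MBPS + n_peers * PEER_UP_MBPS
    return n_peers * FILE_MB * 8 / total_up

for n in (10, 100, 1000):
    print(n, round(client_server_time(n)), round(swarm_time(n)))
```

The numbers are arbitrary, but the shape of the result is the point: with swarming, adding downloaders adds capacity instead of only adding load.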
[1] http://en.wikipedia.org/wiki/BitTorrent_%28protocol%29

Damas-Milner type inference (often called Hindley-Milner type inference) was published in 1983 and has been the basis of every sophisticated static type system since. It was a genuinely new idea in programming languages (admittedly based on ideas published in the 1970s, but not made practical until after 1980). In terms of importance I put it up there with Self and the techniques used to implement Self; in terms of influence it has no peer. (The rest of the OO world is still doing variations on Smalltalk or Simula.)
Variations on type inference are still playing out; the variation I would single out the most is Wadler and Blott's type class mechanism for resolving overloading, which was later discovered to offer very powerful mechanisms for programming at the type level. The end to this story is still being written.
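For readers who haven't met it, the heart of Damas-Milner is unification of type terms. The following is only a toy Python sketch of that one step (the representation, the names, and the omission of the occurs check are all mine), not the full Algorithm W.

```
# Toy sketch of the unification step at the heart of Damas-Milner inference.
# Types are either a type variable ("a", "b", ...) or a tuple such as
# ("->", arg, result) or ("int",). Occurs check omitted for brevity.

def resolve(t, subst):
    # Follow substitutions until we hit a concrete type or an unbound variable.
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst):
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str):                 # t1 is an unbound type variable
        return {**subst, t1: t2}
    if isinstance(t2, str):                 # t2 is an unbound type variable
        return {**subst, t2: t1}
    if t1[0] == t2[0] and len(t1) == len(t2):
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
        return subst
    raise TypeError(f"cannot unify {t1} with {t2}")

# Infer what "a" and "b" must be when an (int -> int) function meets (a -> b):
s = unify(("->", ("int",), ("int",)), ("->", "a", "b"), {})
print(resolve("a", s), resolve("b", s))   # ('int',) ('int',)
```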
Here's a plug for Google map-reduce, not just for itself, but as a proxy for Google's achievement of running fast, reliable services on top of farms of unreliable, commodity machines. Definitely an important invention and totally different from the big-iron mainframe approaches to heavyweight computation that ruled the roost in 1980.
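As a rough illustration of the programming model (not Google's system), here is the classic word count expressed as map, shuffle, and reduce phases in a few lines of Python; all names here are illustrative.

```
# Minimal single-machine sketch of the map-reduce programming model
# (word count); this shows the shape of the idea, not Google's implementation.
from collections import defaultdict

def map_phase(documents):
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

docs = ["the quick brown fox", "the lazy dog", "the fox"]
print(reduce_phase(shuffle(map_phase(docs))))
# {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```

The interesting part of the real thing is, of course, that the map and reduce phases run across thousands of unreliable machines while the framework handles partitioning, retries, and failures.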
Tagging, the way information is categorized. Yes, the little boxes of text under each question.
It is amazing that it took about 30 years to invent tagging. We used lists and tables of contents; we used things which are optimized for printed books.
However, 30 years is much shorter than the time it took people to realize that printed books could be made in a smaller format, one small enough to hold in your hands.
I think that the tagging concept is underestimated among core CS guys. All the research is focused on natural language processing (a top-down approach). But tagging is the first language that both computers and people can understand well. It is a bottom-up approach to getting computers to work with natural language.
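A tiny sketch of why tags suit machines so well: an inverted mapping from item to tags turns "find everything labelled X and Y" into a cheap set operation. The Python below is purely illustrative, with invented data.

```
# Tag-based lookup as set operations: no hierarchy, no fixed categories.
tags = {
    "question-1": {"python", "performance"},
    "question-2": {"python", "gui"},
    "question-3": {"gui", "design"},
}

def items_with(*wanted):
    # An item matches if its tag set contains every requested tag.
    return [item for item, ts in tags.items() if set(wanted) <= ts]

print(items_with("python"))          # ['question-1', 'question-2']
print(items_with("python", "gui"))   # ['question-2']
```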
I think we are looking at this the wrong way and drawing the wrong conclusions. If I get this right, the cycle goes:
Idea -> first implementation -> minority adoption -> critical mass -> commodity product
From the very first idea to the commodity, you often have centuries, assuming the idea ever makes it to that stage. Da Vinci may have drawn some kind of helicopter in 1493 but it took about 400 years to get an actual machine capable of lifting itself off the ground.
From William Bourne's first description of a submarine in 1580 to the first implementation in 1800, you have 220 years, and current submarines are still in their infancy: we know almost nothing about underwater travel (with two thirds of the planet under sea, think of the potential real estate ;).
And there is no telling that there weren't earlier, much earlier, ideas that we just never heard of. Based on some legends, it looks like Alexander the Great used some kind of diving bell in 332 BC (which is the basic idea of a submarine: a device to carry people and an air supply below the sea). Counting that, we are looking at 2000 years from idea (even with a basic prototype) to product.
What I am saying is that looking today for implementations, let alone products, that were not even ideas prior to 1980 is ... I betcha the "quick sort" algorithm was used by some no name file clerk in ancient China. So what?
There were networked computers 40 years ago, sure, but that didn't compare with today's Internet. The basic idea/technology was there, but regardless you couldn't play a game of Warcraft online.
I claim that we need really new ideas in most areas of computing, and I would like to know of any important and powerful ones that have been done recently. If we can't really find them, then we should ask "Why?" and "What should we be doing?"
Historically, we have never been able to "find them" that close from the idea, that fast. I think the cycle is getting faster, but computing is still darn young.
Currently, I am trying to figure out how to make a hologram (the Star Wars kind, without any physical support). I think I know how to make it work. I haven't even gathered the tools, materials, or funding, and yet even if I were to succeed to any degree, the actual idea would already be several decades old at the very least, and related implementations/technologies have been used for just as long.
As soon as you start listing actual products, you can be pretty sure that concepts and first implementations existed a while ago. Doesn't matter.
You could argue with some reason that nothing is new, ever, or that everything is new, always. That's philosophy and both viewpoints can be defended.
From a practical viewpoint, truth lies somewhere in between. Truth is not a binary concept, boolean logic be damned.
The Chinese may have come up with the printing press a while back, but it's only been about 10 years that most people can print decent color photos at home for a reasonable price.
Invention is nowhere and everywhere, depending on your criteria and frame of reference.
Google's Page Rank [1] algorithm. While it could be seen as just a refinement of web crawling search engines, I would point out that they too were developed post-1980.
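For the curious, the core idea fits in a few lines of Python: repeatedly redistribute each page's rank along its outgoing links, with a damping factor. Only the 0.85 damping factor comes from the original paper; the graph, names, and iteration count below are invented for illustration.

```
# A toy power-iteration sketch of PageRank on an invented three-page link graph.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = rank[page] / len(outgoing)   # each page splits its rank among its links
        for target in outgoing:
            new_rank[target] += damping * share
    rank = new_rank

print({p: round(r, 3) for p, r in rank.items()})
```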
[1] http://en.wikipedia.org/wiki/PageRank

DNS, 1983, and dependent advances like email host resolution via MX records instead of bang-paths. *shudder*
Zeroconf working on top of DNS, 2000. I plug my printer into the network and my laptop sees it. I start a web server on the network and my browser sees it. (Assuming they broadcast their availability.)
NTP (1985) based on Marzullo's algorithm (1984). Accurate time over jittery networks.
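Roughly, Marzullo's algorithm treats each clock source as an interval (estimate plus or minus its error bound) and finds the sub-interval consistent with the most sources. Here is a brute-force Python sketch of that idea with invented readings; the real algorithm does the same job in a single sorted sweep.

```
# Brute-force sketch of the interval-intersection idea behind Marzullo's
# algorithm as used by NTP: find the sub-interval covered by the most sources.
readings = [(10.0, 12.0), (11.0, 13.0), (11.5, 14.0), (20.0, 21.0)]  # invented data

points = sorted({p for lo, hi in readings for p in (lo, hi)})
best, best_interval = 0, None
for lo, hi in zip(points, points[1:]):
    covered = sum(1 for a, b in readings if a <= lo and hi <= b)
    if covered > best:
        best, best_interval = covered, (lo, hi)

print(best_interval, "agreed on by", best, "sources")  # (11.5, 12.0) agreed on by 3 sources
```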
The mouse scroll wheel, 1995. Using mice without it feels so primitive. And no, it's not something that Engelbart's team thought of and forgot to mention. At least not when I asked someone who was on the team at the time. (It was at some Engelbart event in 1998 or so. I got to handle one of the first mice.)
Unicode, 1987, and its dependent advances for different types of encoding, normalization, bidirectional text, etc.
Yes, it's pretty common for people to use all 5 of these every day.
Are these "really new ideas?" After all, there were mice, there were character encodings, there was network timekeeping. Tell me how I can distinguish between "new" and "really new" and I'll answer that one for you. My intuition says that these are new enough.
In smaller domains there are easily more recent advances. In bioinformatics, for example, Smith-Waterman (1981) and more especially BLAST (1990) effectively make the field possible. But it sounds like you're asking for ideas which are very broad across the entire field of computing, and the low-hanging fruit gets picked first. Thus is it always with a new field.
What about digital cameras?
According to Wikipedia, the first true digital camera [1] appeared in 1988, with mass market digital cameras becoming affordable in the late 1990s.
[1] http://en.wikipedia.org/wiki/History_of_the_camera#The_arrival_of_true_digital_cameras

Modern shading languages and the prevalence of modern GPUs.
The GPU is also a low cost parallel supercomputer with tools like CUDA and OpenCL for blazing fast high level parallel code. Thank you to all those gamers out there driving down the prices of these increasingly impressive hardware marvels. In the next five years I hope every new computer sold (and iPhones too) will have the ability to run massively parallel code as a basic assumption, much like 24 bit color or 32 bit protected mode.
JIT compilation was invented in the late 1980s.
To address the two questions, "Why the death of new ideas?" and "What to do about it?":
I suspect a lot of the lack of progress is due to the massive influx of capital and entrenched wealth in the industry. Sounds counterintuitive, but I think it's become conventional wisdom that any new idea gets one shot; if it doesn't make it at the first try, it can't come back. It gets bought by someone with entrenched interests, or just FAILs, and the energy is gone. A couple examples are tablet computers, and integrated office software. The Newton and several others had real potential, but ended up (through competitive attrition and bad judgment) squandering their birthrights, killing whole categories. (I was especially fond of Ashton Tate's Framework; but I'm still stuck with Word and Excel).
What to do? The first thing that comes to mind is Wm. Shakespeare's advice: "Let's kill all the lawyers." But now they're too well armed, I'm afraid. I actually think the best alternative is to find an Open Source initiative of some kind. They seem to maintain accessibility and incremental improvement better than the alternatives. But the industry has gotten big enough so that some kind of organic collaborative mechanism is necessary to get traction.
I also think that there's a dynamic that says that the entrenched interests (especially platforms) require a substantial amount of change - churn - to justify continuing revenue streams; and this absorbs a lot of creative energy that could have been spent in better ways. Look how much time we spend treading water with the newest iteration from Microsoft or Sun or Linux or Firefox, making changes to systems that for the most part work fine already. It's not because they are evil, it's just built into the industry. There's no such thing as Stable Equilibrium; all the feedback mechanisms are positive, favoring change over stability. (Did you ever see a feature withdrawn, or a change retracted?)
The other clue that has been discussed on SO is the Skunkworks Syndrome (ref: Geoffrey Moore): real innovation in large organizations almost always (90%+) shows up in unauthorized projects that emerge spontaneously, fueled exclusively by individual or small group initiative (and more often than not opposed by formal management hierarchies). So: Question Authority, Buck the System.
One thing that astounds me is the humble spreadsheet. Non-programmer folk build wild and wonderful solutions to real world problems with a simple grid of formulas. Replicating their efforts in a desktop application often takes 10 to 100 times longer than it took to write the spreadsheet, and the resulting application is often harder to use and full of bugs!
I believe the key to the success of the spreadsheet is automatic dependency analysis. If the user of the spreadsheet was forced to use the observer pattern, they'd have no chance of getting it right.
So, the big advance is automatic dependency analysis. Now why hasn't any modern platform (Java, .Net, Web Services) built this into the core of the system? Especially in a day and age of scaling through parallelization - a graph of dependencies leads to parallel recomputation trivially.
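As a sketch of what "automatic dependency analysis" buys you, here is a toy Python recalculation engine: the user only declares each cell's formula and its inputs, and the engine works out the evaluation order itself. The cell names are invented, and a real engine would also recompute only the cells downstream of a change rather than everything.

```
# Toy spreadsheet-style recalculation: formulas declare their dependencies,
# and evaluation order is derived automatically (assumes no cycles).
inputs = {"price": 10.0, "qty": 3}
formulas = {
    "subtotal": (lambda v: v["price"] * v["qty"],    ["price", "qty"]),
    "tax":      (lambda v: v["subtotal"] * 0.2,      ["subtotal"]),
    "total":    (lambda v: v["subtotal"] + v["tax"], ["subtotal", "tax"]),
}

def evaluate():
    # Evaluate each formula once, as soon as all of its inputs are available.
    values, remaining = dict(inputs), dict(formulas)
    while remaining:
        for cell, (fn, deps) in list(remaining.items()):
            if all(d in values for d in deps):
                values[cell] = fn(values)
                del remaining[cell]
    return values

print(evaluate()["total"])   # 36.0
inputs["qty"] = 5
print(evaluate()["total"])   # 60.0
```

The user never wires up observers or update order; that is exactly the part the spreadsheet does for them.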
Edit: Dang - just checked. VisiCalc was released in 1979 - let's pretend it's a post-1980 invention.
Edit 2: Seems that the spreadsheet is already noted by Alan anyway - if the question that brought him to this forum [1] is correct!
[1] https://stackoverflow.com/questions/357813/help-me-remember-a-quote-from-alan-kay

Software:
Virtualization and emulation
P2P data transfers
community-driven projects like Wikipedia, SETI@home ...
web crawling and web search engines, i.e. indexing information that is spread out all over the world
Hardware:
the modular PC
E-paper
The rediscovery of the monad by functional programming researchers. The monad was instrumental in allowing a pure, lazy language (Haskell) to become a practical tool; it has also influenced the design of combinator libraries (monadic parser combinators have even found their way into Python).
Moggi's "A category-theoretic account of program modules" (1989) is generally credited with bringing monads into view for effectful computation; Wadler's work (for example, "Imperative functional programming" (1993)) presented monads as practical tool.
Shrinkwrap software
Before 1980, software was mostly specially written. If you ran a business, and wanted to computerize, you'd typically get a computer and compiler and database, and get your own stuff written. Business software was typically written to adapt to business practices. This is not to say there was no canned software (I worked with SPSS before 1980), but it wasn't the norm, and what I saw tended to be infrastructure and research software.
Nowadays, you can go to a computer store and find, on the shelf, everything you need to run a small business. It isn't designed to fit seamlessly into whatever practices you used to have, but it will work well once you learn to work more or less according to its workflow. Large businesses are a lot closer to shrinkwrap than they used to be, with things like SAP and PeopleSoft.
It isn't a clean break, but after 1980 there was a very definite shift from expensive custom software to low-cost off-the-shelf software, and flexibility shifted from software to business procedures.
It also affected the economics of software. Custom software solutions can be profitable, but it doesn't scale. You can only charge one client so much, and you can't sell the same thing to multiple clients. With shrinkwrap software, you can sell lots and lots of the same thing, amortizing development costs over a very large sales base. (You do have to provide support, but that scales. Just consider it a marginal cost of selling the software.)
Theoretically, where there are big winners from a change, there are going to be losers. So far, the business of software has kept expanding, so that as areas become commoditized other areas open up. This is likely to come to an end sometime, and moderately talented developers will find themselves in a real crunch, unable to work for the big boys and crowded out of the market. (This presumably happens for other fields; I suspect the demand for accountants is much smaller than it would be without QuickBooks and the like.)
Outside of hardware innovations, I tend to find that there is little or nothing new under the sun. Most of the really big ideas date back to people like von Neumann and Alan Turing.
A lot of things that are labelled 'technology' these days are really just a program or library somebody wrote, or a retread of an old idea with a new metaphor, acronym, or brand name.
Computer worms were researched in the early eighties of the last century at the Xerox Palo Alto Research Center.
From John Shoch and Jon Hupp's "The 'Worm' Programs - Early Experience with a Distributed Computation" [1] (Communications of the ACM, March 1982, Volume 25, Number 3, pp. 172-180):
In The Shockwave Rider [2], J. Brunner [3] developed the notion of an omnipotent "tapeworm" program running loose through a network of computers - an idea which may seem rather disturbing, but which is also quite beyond our current capabilities. The basic model, however, remains a very provocative one: a program or a computation that can move from machine to machine, harnessing resources as needed, and replicating itself when necessary.
In a similar vein, we once described a computational model based upon the classic science-fiction film, The Blob [4]: a program that started out running in one machine, but as its appetite for computing cycles grew, it could reach out, find unused machines, and grow to encompass those resources. In the middle of the night, such a program could mobilize hundreds of machines in one building; in the morning, as users reclaimed their machines, the "blob" would have to retreat in an orderly manner, gathering up the intermediate results of its computation. Holed up in one or two machines during the day, the program could emerge again later as resources became available, again expanding the computation. (This affinity for nighttime exploration led one researcher to describe these as "vampire programs.")
Quoting Alan Kay: "The best way to predict the future is to invent it."
[1] http://vx.netlux.org/lib/ajm01.html

Better user interfaces.
Today’s user interfaces still suck. And I don't mean in small ways but in large, fundamental ways. I can't help but notice that even the best programs still have interfaces that are either extremely complex or that require a lot of abstract thinking in other ways, and that just don't approach the ease of conventional, non-software tools.
Granted, this is due to the fact that software allows you to do so much more than conventional tools. That's no reason to accept the status quo, though. Additionally, most software is simply not well done.
In general, applications still lack a certain “just works” feeling and are oriented too much towards what can be done rather than what should be done. One point that has been raised time and again, and that is still not solved, is saving. Applications crash, destroying hours of work. I have the habit of pressing Ctrl+S every few seconds (of course, this no longer works in web applications). Why do I have to do this? It's mind-numbingly stupid. This is clearly a task for automation. Of course, the application also has to save a diff for every modification I make (basically an infinite undo list) in case I make an error.
Solving this problem isn't even actually hard. It would just be hard to implement in every application, since there is no good API to do this. Programming tools and libraries have to improve significantly before such efforts can be implemented effortlessly across all platforms and programs, for all file formats, with arbitrary backup storage and no required user interaction. But it is a necessary step before we finally start writing “good” applications instead of merely adequate ones.
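As a sketch of how cheap the "save a diff for every modification" idea is at its core, here is a toy Python document class that records a unified diff per edit using the standard difflib module. Everything else about it (the class, its names, what triggers an edit) is invented for illustration; the hard part the answer complains about is doing this uniformly across real applications and file formats.

```
# A toy sketch of "save a diff for every modification": the document always
# keeps its latest text, and every edit also appends a unified diff of what
# changed, giving an ever-growing modification trail.
import difflib

class Document:
    def __init__(self, text=""):
        self.text = text
        self.history = []          # one unified diff per edit, newest last

    def edit(self, new_text):
        # Record what this edit changed before overwriting the text.
        diff = "".join(difflib.unified_diff(
            self.text.splitlines(keepends=True),
            new_text.splitlines(keepends=True),
            fromfile="before", tofile="after"))
        self.history.append(diff)
        self.text = new_text       # the latest state is always kept

doc = Document("hello\n")
doc.edit("hello\nworld\n")
doc.edit("hello\nbrave world\n")
print(len(doc.history), "edits recorded")
print(doc.history[-1])
```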
I believe that Apple currently best approximates the “just works” feeling in some regards. Take, for example, their newest version of iPhoto, which features face recognition that automatically groups photos by the people appearing in them. That is a classic task that the user does not want to do manually and doesn't understand why the computer doesn't do automatically. And even iPhoto is still a very long way from a good UI, since said feature still requires ultimate confirmation by the user (for each photo!), since the face recognition engine isn't perfect.
HTM systems (Hierarchical Temporal Memory [1]).
A new approach to Artificial Intelligence, initiated by Jeff Hawkins through the book "On Intelligence [2]".
Now active as a company called Numenta [3] where these ideas are put to the test through development of "true" AI, with an invitation to the community to participate by using the system through SDKs.
It's more about building machine intelligence from the ground up, rather than trying to emulate human reasoning.
[1] http://en.wikipedia.org/wiki/Hierarchical_temporal_memory

The use of physics in human-computer interaction to provide an alternative, understandable metaphor. This, combined with gestures and haptics, will likely result in a replacement for the current common GUI metaphor invented in the 70's and in common use since the mid-to-late 80's.
The computing power wasn't present in 1980 to make that possible. I believe Games [1] likely led the way here. An example can easily be seen in the interaction of list scrolling on the iPod Touch/iPhone. The interaction mechanism relies on the intuition of how momentum and friction work in the real world to provide a simple way to scroll a list of items, and the usability relies on the physical gesture that causes the scroll.
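The mechanism itself is simple enough to sketch: a flick sets a velocity, and friction bleeds it off frame by frame until the list stops on its own. The Python below is only an illustration with invented constants, not any vendor's implementation.

```
# Momentum-and-friction scrolling in miniature: position advances by the
# current velocity each frame, and friction decays the velocity until it
# drops below a threshold. All constants are invented.
def scroll(position, velocity, friction=0.95, dt=1 / 60, min_speed=1.0):
    frames = [position]
    while abs(velocity) > min_speed:
        position += velocity * dt     # move by the current velocity
        velocity *= friction          # friction bleeds off momentum each frame
        frames.append(position)
    return frames

path = scroll(position=0.0, velocity=2000.0)   # pixels and pixels/second
print(f"{len(path)} frames, came to rest at {path[-1]:.0f}px")
```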
[1] http://www.gamasutra.com/view/feature/2798/physics_in_games_a_new_gameplay_.php

I believe Unit Testing, TDD and Continuous Integration are significant inventions after 1980.
Mobile phones.
While the first "wireless phone" patent was in 1908, and they were cooking for a long time (0G in 1945, 1G launched in Japan in 1979), modern 2G digital cell phones didn't appear until 1991. SMS didn't exist until 1993, and Internet access appeared in 1999.
I started programming Jan 2nd 1980. I've tried to think about significant new inventions over my career. I struggle to think of any. Most of what I consider significant were actually invented prior to 1980 but then weren't widely adopted or improved until after.
While the hardware has improved tremendously the software industry has struggled to keep up. We are light years ahead of 1980, but most improvements have been refinements rather than inventions. Since 1980 we have been too busy applying what the advancements let us do rather than inventing. By themselves most of these incremental inventions are not important or powerful, but when you look back over the last 29 years they are quite powerful.
We probably need to embrace the incremental improvements and steer them. I believe that truly original ideas will probably come from people with little exposure to computers and they are becoming harder to find.
Nothing.
I think it's because people have changed their attitudes. People used to believe that if they would just find that "big idea", then they would strike it rich. Today, people believe that it is the execution and not the discovery that pays out the most. You have mantras such as "ideas are a dime a dozen" and "the second mouse gets the cheese". So people are focused on exploiting existing ideas rather than coming up with new ones.
Open Source community development.
The iPad [1] (released April 2010): surely such a concept is absolutely revolutionary!
(image: Apple iPad) [2]
No way Alan Kay saw that coming from the 1970's!
Imagine such a "personal, portable information manipulator"...
...
Wait? What!? The Dynabook [3] you say?
Thought out by Alan Kay as early as 1968, and described in great detail in this 1972 paper [4]??
NOOOoooooooo....
Oh well... never mind.
[1] http://en.wikipedia.org/wiki/IPad

Ideas around Social Computing have advanced since 1980. The Well [1] started in 1985. While I'm sure there were online communities before, I believe some of the true insights in the area have happened post-1980. The adverse dynamic aspects of social communities and their interaction with a software system are much like the disasters of the Tacoma Narrows Bridge [2].
I think Clay Shirky's [3] work in the area illuminates those effects and how to mitigate them. I'd say interesting real world examples of social software insights include things like reCAPTCHA [4] and Wikipedia [5], where significant valuable work is done by the participants mediated by the software.
[1] http://en.wikipedia.org/wiki/The_WELL

I think the best ideas invented since the 1980s will be the ones that we're not aware of, either because they are so small and ubiquitous as to be unnoticeable, or because their popularity hasn't really taken off.
One example of the former is clicking and dragging to select a portion of text. I believe this first appeared on the Macintosh in 1984. Before that you had separate buttons for picking the beginning of a selection and the end of a selection. Quite onerous.
An example of the latter is (maybe) visual programming languages. I'm not talking about things like HyperCard; I mean things like Max/MSP, Prograph, Quartz Composer, Yahoo Pipes, etc. At the moment they are really niche, but the way I see it, there's really nothing stopping them from being just as expressive and powerful as a standard programming language, except for mindshare.
Visual programming languages effectively enforce the functional programming property of referential transparency. This is a really useful property for code to have. The way they enforce this isn't artificial either - it's simply by virtue of the metaphor they use.
VPLs make programming accessible to people who would not otherwise be able to program, such as people with language difficulties, like dyslexia, or even just laymen who need to whip up a simple time-saver. Professional programmers may scoff at this, but personally, I think it would be great if programming became a really ubiquitous skill, like literacy.
As it stands, though, VPLs are really a niche interest and haven't really gone mainstream.
What we should do differently
All computer science majors should be required to double major, coupling the CS major with one of the humanities: painting, literature, design, psychology, history, English, whatever. A lot of the problem is that the industry is populated with people who have a really narrow and unimaginative understanding of the world, and therefore can't begin to imagine a computer working any significantly differently than it already does. (If it helps, you can imagine that I'm talking about someone other than you, the person reading this.) Mathematics is great, but in the end it's just a tool for achieving things. We need experts who understand the nature of creativity, and who also understand technology.
But even if we have them, there needs to be an environment where there's a possibility that doing something new would be worth the risk. It's 100 times more likely that anything truly new gets rejected out of hand, rather viciously. (The Newton is an example of this.) So we need a much higher tolerance for failure. We should not be afraid to try an idea which has failed in the past. We should not fully reject our own failures, and we should learn to recognize when we have failed. We should not see failure as a bad thing, and so we shouldn't lie to ourselves or to others about it. We should just get used to it, because it is just about the only constant in this ever-changing industry. Post-mortems are useful in this regard.
One of the more interesting things about Smalltalk, I think, was not the language itself but the process that was used to arrive at its design: the iterative design process, going through many, many revisions, but also very carefully and critically identifying the flaws of the existing system and finding solutions in the next one. The more perspectives, and the broader the perspectives, we have on the situation, the better we can judge where the mistakes and problems are. So don't just study computer science. Study as many other academic subjects as you can get yourself to be interested in.
The pre-1980 days were, of course, the glory days of Xerox PARC. Back when the GUI, the mouse, the laser printer, the internet, and the personal computer were all being created. (Seeing as I'm too young to have been alive back then, and you were pretty much working on inventing all of those, I can't tell you anything about 1980 that you don't already know, so let's move on.)
The thing is, though, that the pre-1980 days were a lot more vibrant in terms of truly disruptive new technologies. That's the way it is with any new field -- how many game-changing technology advances have you seen in railroads in the past 100 years? How many have you seen in lightbulbs? In the printing press? Once something ignites a hype in the right circles, there is an explosive period of invention, followed by a long period of maturing. After that, you're not going to see the same kind of completely radical changes again UNLESS the basic circumstances change.
Luckily, that might be happening in a number of fields, and it has already happened in a few others:
Mobility - smart phones bring computing to a truly portable platform, which will soon include location-based services and proximity-based ad-hoc networks. It's a completely new paradigm that's potentially as game-changing as the GUI has been
The WWW (HTTP, HTML and DNS) has already been mentioned and is an obvious addition to the list, since it is enabling global, inexpensive, mainstream rich communication across the globe - all thanks to a computing platform
On the interface side, both touch, multitouch (Jeff Han comes to mind) and the Wiimote need mentioning. Currently, they are basically curiosities, but so were the early GUIs.
OOP design patterns -- higher level solutions as best practices to hard problems. Depending on your definition of 'computing', it may or may not belong on the list, but if you count OOP as a significant advance pre-1980 (I certainly do), I think design patterns and the GoF deserve a mention too
Google's PageRank and MapReduce algorithms - I am pleased to notice I wasn't the first to mention them, and seriously --- where would the world be without the principles of both of them? I vividly remember what the world looked like before them, and suffice it to say Google really IS my friend.
Non-volatile memory -- it's on the hardware side, but it is going to play a significant role in the future of computing - making bootup times a thing of the past, for example, and enabling us to use computers in entirely new ways
Semantic (natural language) search / analysis / classification / translation... We're not quite there yet, but companies like Powerset give the impression that we're on the brink.
On that note, intelligent HTMs should be on this list as well. I am yet another believer in Jeff Hawkins' model and approach, and if it works, it will mean a complete redefinition of what computers can do, what it means to be human, and where the world can go from here. Creating a real intelligence in that way (synthetically) would be bigger than anything the human race has accomplished before.
GNU + Linux
3D printing / rapid prototyping (and, in time, manufacturing)
P2P (which also lead to VoIP etc.)
E-ink, once the technologies mature a bit more
RFID might belong on the list, but the verdict is still out on that one
Quantum Computing is the most obvious element on the list, except we still haven't been able to get enough qubits to play along. However, my friends in the field tell me there's incredible progress going on even as we speak, so I'm holding my breath for that one.
And finally, I want to mention a personal favourite: distributed intelligence, or its other name: artificial artificial intelligence. The idea of connecting a huge number of people in a network and allowing them access to the combined minds of everyone else through some form of question answering interface. It's been done a number of times recently, with Yahoo Answers, Askville, Amazon Mechanical Turk, and so on, but in my mind, those are all missing the mark by a LOT... much like the many implementations of distributed hypertext that came before Tim Berners-Lee's HTML, or the many web crawlers before Google. Seriously -- someone needs to build a search interface into 'the hive mind' to blow everyone else out of the water. IMHO - it is only a matter of time.
Reorganization is what we need, not reinvention.
We have all the hardware and software components we need right now to do amazing things for years to come.
I believe there is a disease in the sciences, where every participant is always trying to invent something new to distinguish themselves from others. This is in contrast to doing some of the messy work of cataloging or teaching older works.
People who build 'new' things are generally considered of a higher pedigree than people who reuse existing, sometimes almost ancient, works. (Ancient to, say, a 20-year-old, for whom something like Lisp (1958) was created more than double their lifetime in the past.)
Good old ideas need to be resurrected and propagated far and wide, and we need to stop trying to build businesses or programmer movements that effectively trample old works and systems in power plays to be the next new thing - when in fact most 'new shiny' things are just aspects of old ideas resurrected.
Effective Parallelization and Quantum Computing - I think these are two areas where progress has been made and much more progress will be made to make very significant changes to our use of computing power.
Effective Parallelization meaning parallelizing and distributing processing without the need for special programming techniques, but where it is built into the compiler/framework.
Flying cars and hoverboards. Oh wait, those haven't been invented yet. But by 2015, we have to have them. Otherwise Back To The Future 2 will have been a big lie!
One thing that hasn't changed in mainstream computing is the hierarchical filesystem. That's a shame, IMO, since some work was being done in the late 1980s and 1990s to design new kinds of file systems more appropriate for modern, object-oriented operating systems -- ones which are OO from the ground up.
The OO operating systems tended to have flat object stores that were expandable and flexible. I think the EROS Project [1] was one built around that idea; PenPoint OS [2] was a 1990s object-oriented OS; and Amazon S3 [3], of course, is a contemporary flat object store.
There are at least two ideas in OO, flat filesystems that I particularly liked:
The entire disk was essentially swap space. Objects exist in memory, get paged out when they are not needed, and brought back in when they are. There's no need for a hierarchical filesystem that's separate from virtual memory. Programs are "always running," in a sense.
A flat file/object store allows content to be indexed and searched, rather than forcing the user to decide -- ahead of time -- where the content will live in relation to other content and what its name shall be. A hierarchical system could be built on top of the flat storage, but it's not required.
As Alan Cooper states in his book, About Face [4], hierarchical filesystems are a kludge, designed for the computers of the 1960s and 1970s with limited memory and disk storage. Sadly, the popularity of Windows and Unix have guaranteed the dominance of the hierarchical filesystem to this day.
[1] http://www.eros-os.org/design-notes/DiskFormatting.html

Pretty much everything important in modern 3D computer graphics. Ray tracing (in the computer graphics sense) got its jump start from Whitted's 1980 paper. Marching cubes ('87) is the standard way to extract an isosurface from 3D data.
Virtual Worlds in which you are represented by a virtual alter ego (aka Avatar), for socializing and roleplaying.
Most commonly referred to as MMOs - Massive(ly) Multiplayer Online. Some popular examples include World of Warcraft, Everquest, Second Life.
PS: no, they still don't require the heavy headgear as typically depicted in geek movies of the 80s. It's a shame....
Touchscreens and Motion Sensing interfaces for human computer interaction.
For example:
Only question is ... are these technologies really post-80s?
As for programming concepts, IoC / Dependency Injection in 1988, with roots in 1983. Fowler has some notes on the history of the concept on his Bliki [1].
[1] http://martinfowler.com/bliki/InversionOfControl.html

Access to massive data.
The sheer size and scale of the data we have available these days is massive compared to what it used to be in the 80s. We've had to make a large number of changes to both our hardware and software to be able to store and display this stuff. One day, we'll actually learn how to qualify and mine it for something useful. Someday.
Paul.
The first thing to do is define invention [1], or else you'll get off on the wrong track. The second definition of invention from Dictionary.com says:
U.S. Patent Law. a new, useful process, machine, improvement, etc., that did not exist previously and that is recognized as the product of some unique intuition or genius, as distinguished from ordinary mechanical skill or craftsmanship.
Thus, since 1980, there have been very few new inventions in computing. What has there been? Obviously there has been large amounts of new technologies and new things coming about, but what are they?
We aren't inventing any more, we are improving what primarily exists already.
Development of the CD, or compact disc, started in 1977, though it wasn't accepted by industry until 1982, by which time the first factory for pressing CDs had just come into readiness. Eventually, by 1985, the CD-ROM (Read-Only Memory) was accepted as a medium. The CD-RW followed 5 years later. (Source: Wikipedia [2])
Now what? Well, given that we have larger hard drives (still just improvement on the paradigm) we need more space to be able to supplant the VHS market and make videos compatible with computers. Thus came about the DVD, though I am cutting out many improvements to the existing CD technology.
The DVD came about, was "invented", during the year of 1995. (Source: Wikipedia [3])
Since then we have had:
Obviously this list isn't all inclusive. But spot the new invention, remember the definition I gave above, in that list. You can't! They're all just variations on the concept of an optical disc, all just variations on the same hardware, and all just variations on existing software.
Cost. See, it's cheaper economically to make incremental improvements to an existing product. If I can sell you a HD DVD or a Blu-ray Disc because you believe it to be necessary or cool, then I have no need to release my plans for the Triple or Quad layer DVDs. In fact, I can charge you through the nose just to get the new technology because you are an early adopter and you need my "new and improved!" hardware.
This is called either marketing, or product relations.
What about it? Pre-1980 there was a lot of software inventiveness going on, but since then it has mostly just been improvements on what already exists or reinvention of the wheel. Look at any OS or office package to see this.
As far as I'm concerned, there have been virtually no new inventions in the past 29 years. I could wax long and cross a great many industries, but why should I bother? Once you start thinking about it, and start comparing an "invention" to a prior, similar product ... you'll find it is so similar that it isn't even funny. Even the internal combustion engine has been around since 1906 with no new inventions in that field since then; many improvements and variations of this "wheel" yes, but no new inventions.
Not even that new weapon America deployed in Iraq--the one that uses microwaves to make a person feel shocked like they touched a lightbulb--is new. The same idea was used in security systems, then classified and taken off the market, with ultrasound to make an intruder feel physically ill. This is a directed form of the weapon with a different wavelength and application, not a new invention.
[1] http://dictionary.reference.com/browse/invention

Electrically Erasable Programmable Memory, generalized into non-volatile read/write memory, the most well known and ubiquitous form currently being Flash. http://en.wikipedia.org/wiki/EEPROM lists this as being invented in 1984.
By giving the storage medium the same general physics, power requirements, size and stability as the processing units we remove this as a limiting factor in designs for where we place processors. This expands the possibilities for how and where we place 'intelligence' to such a plethora of smart devices (and things that would previously never have been candidates for being considered smart at all) that we are still taken up in the surge. Mp3 players are really just a fraction of this.
Optical computing. Seems like it should have been around longer but I can't currently find any references pre-dating 1982 or so (and the relevant piece of technology, the optical transistor, didn't pop up until 1986).
Well, the World Wide Web has already been mentioned, but more basically, I would say "DNS". It seems that it was invented in 1983 (http://en.wikipedia.org/wiki/Domain_Name_System) and IMHO we can consider it the mandatory link between the invention of the internet protocol and the capability to spread what is now called the web all over the world.
Still in the "network" section, I would add Wi-Fi. It was invented in the 90's (though I agree it's not exactly "computing", but more related to hardware).
In a stricter "algorithmic" section, I think of turbo codes (dated 1993); some say they only close in on the limit defined by Shannon's information theory, but wouldn't this argument reject all the other answers with "everything was already in seed in Lovelace's, Babbage's and Turing's writings"?
In the field of cryptography, I would add the PGP program by P. Zimmermann (dated 1991), which brought a quite robust (at that time) free encryption program to citizens, and contributed to shaking up the government's posture on encryption a little. In fact I think it was one of the factors in the "liberalization" of cryptography, which was a prerequisite for developing e-commerce.
The changes to infrastructure to allow accessible internet from home and office.
Documented and accepted standards from W3C through to APIs
Apart from that most of what we'd think of as new dates back a lot longer than you'd think (e.g. GUI, OOP).
I think the laptop was invented around 1980 and I also think that the development of laptops and portable computing changed a lot of people's lives - certainly those of us who work in IT, or who use computers and travel.
I'd say the biggest trend is an ever-increasing lack of location dependence and pervasiveness. An interesting philosophical exercise these days is to count the computers in your immediate area. They're everywhere: desktops, keyboards, microwaves, radios, televisions, cell phones, etc. My grandmother is computer illiterate, yet her life is as infested with small computers as everyone else's. She can make a call to me from the middle of an empty field. I can then answer that call zipping down the highway.
Declarative Programming.
In 1979 "computer programs" were imperative. The programmer was expected to instruct the compiler on both what to do and how to do it. (N1)
Today, ASP.NET WebForms [1] and WPF [2] programmers regularly write code without knowing or caring how it will be implemented. Wikipedia [3] has other, less mainstream examples. Additionally, all of the SGML [4]-derived "markup" languages are declarative, and I doubt many of the programmers of 1979 would have predicted their importance or ubiquity in 30 years.
Although the concept of declarative programming existed before 1980 (see this paper [5] from 1975), its invention took place with the introduction of Caml [6] in 1985 (debatable) or Haskell [7] in 1990 (less debatable). (N2) Since then, declarative programming has increased greatly in popularity. And, when massively multicore processors finally arrive, we'll all be declarative programmers.
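A small Python illustration of the shift being described (invented data): the first version spells out how to build the result step by step, while the second only states what the result should be and leaves the "how" to the language.

```
# Imperative vs. declarative(ish) style for the same task: total quantity per item.
orders = [("apples", 3), ("bread", 1), ("apples", 2), ("milk", 4)]

# Imperative: explicit loop, explicit accumulator, explicit control flow.
totals = {}
for name, qty in orders:
    if name not in totals:
        totals[name] = 0
    totals[name] += qty

# Declarative(ish): state what you want, a total per distinct name.
declarative_totals = {name: sum(q for n, q in orders if n == name)
                      for name in {n for n, _ in orders}}

assert totals == declarative_totals
print(declarative_totals)
```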
--
Notes:
(N1) I can't vouch for this firsthand, since I was a fetus in 1979.
(N2) From other answers, it seems like people are confusing conception with invention. Da Vinci conceived of a helicopter, but he didn't invent it. The question is specifically on inventions in computing.
(N3) Please don't mention Prolog (rel. 1975) in the comments unless you have actually built an app in it.
Podcasting. It allows for an informative way to distribute information and debate. I find it to be more interactive than standard interviews, but with less noise than blog comments.
Instant Messaging [1] has been around for a long time (mid-to-late 60s), but IRC [2] did not come about before 1988.
Video communication, on top of that (as in, for instance, Windows Live Messenger [3], or Skype, or ...) really did change the way we are communicating [4] ;) and is much more recent.
<correction>
(See VideoConferencing: 1968 [5], (image: On-Line System videoconferencing, FJCC 1968) [6], as Alan Kay himself points out in the comment:
Again, please check out what Engelbart demoed in 1968 [7] (including live video chatting and screen sharing). IOW, guessing really doesn't work as well as looking things up. This is why most people make weak assumptions about when things were invented.)
Take that in my face ;), and rightfully so.
Note: the "webcams" (video setups) of those times were not exactly made for your average living room ;)
</correction>
[... resuming the answer:]
The generalization of the webcam [8] (image: Logitech QuickCam Pro 4000) [9] helped too. (Started in 1991, the first such camera, called the CoffeeCam, was pointed at the Trojan Room coffee pot in the computer science department of Cambridge University.)
So: post-1980: 2 out of 3: IRC and the webcam.
[1] http://en.wikipedia.org/wiki/Instant_messaging

“Americans have no past and no future, they live in an extended present.” This describes the state of computing. We live in the 80’s extended into the 21st century. The only thing that’s changed is the size. - Alan Kay
Source: Alan Kay: Is Computer Science an Oxymoron? [1]
[1] http://www.windley.com/archives/2006/02/alan_kay_is_com.shtml

The memristor.
While the idea is not newer than 1980, I believe a working model was not created until 2008. Should it make it past R&D, it will be the most significant advance in computer hardware since the transistor; at the very least, obviating secondary memory.
I claim that we need really new ideas in most areas of computing, and I would like to know of any important and powerful ones that have been done recently. If we can't really find them, then we should ask "Why?" and "What should we be doing?"
The way that I see it, we have not had so many new ideas in computing because we largely haven't needed them. We have been milking the old ideas, and getting so much out of them, such as the phenomenal growth of cpu speed.
When we need new ideas because the "well has run dry" so to speak, then we will see that necessity is the mother of invention.
The one activity I can think of that wasn't there in 1980 was Global Searching Across Disjoint Domains, i.e. Google and a (very few) predecessors - all of which were well post-1980. Associated with conventions for syntactic markup, I think it qualifies as a "new idea"; but I think it also has only just begun; there's a lot of headroom to build up into.
One device that has the potential to accelerate this already lightning-speed vector will soon emerge as the combination camera/GIS/phone/network. It creates the opportunity to automatically collect, classify, and aggregate datapoints in four-dimensional space for the first time. Even tedious manual collections of this type of data are sprouting; imagine when it's done by default.
For better or worse.
Design Patterns which brought computer science closer to computer engineering. GPS and internet address lookup for location based interactions. Service Oriented Architecture (SOA).
Open PC design that led to affordable components (except from Apple :-) and competition that drove innovation and lower prices. This caused the big change from the user going to the computer -- where there was a terminal to use -- to the computer coming to the user, appearing at home and even in one's lap.
Games With a Purpose [1] - Collective intelligence tools like Luis von Ahn and his team are developing might have been a dream before 1980, but there wasn't a widely deployed network with millions of people available and a need (e.g. reCAPTCHA [2]) to actually make it happen.
[1] http://www.gwap.com/

IP Multicast (1991) and Van Jacobson's Dissemination Networking [1] (2006) are the biggest inventions since 1989.
[1] http://video.google.com/videoplay?docid=-6972678839686672840&hl=en

This is a negative result, which is odd as a 'fundamental innovation', but I think it applies since it opened new areas of research and closed off useless ones.
The impossibility of distributed consensus: PODC Influential Paper Award: 2001 [1]
[1] http://www.podc.org/influential/2001.html

We assumed that the main value of our impossibility result was to close off unproductive lines of research on trying to find fault-tolerant consensus algorithms. But much to our surprise, it opened up entirely new lines of research. There has been analysis of exactly what assumptions about the distributed system model are needed for the impossibility proof. Many related distributed problems to which the proof also applies have been found, together with seemingly similar problems which do have solutions. Eventually a long line of research developed in which primitives were classified based on their ability to implement wait-free fault-tolerant consensus.
Low cost/home computing. Something that (at least here in Blighty) wasn't really heard of until the early 1980s. Without home computing, how many people posting here would have got into computing as a career? Or even as a hobby 1 [1]?
Myself, had my folks not got Clive Sinclair's humble rubber-keyed ZX Spectrum [2] back in 1982/1983, I probably wouldn't be here now. And it wasn't just the Speccy: the C64 [3], Vic-20 [4], Acorn Electron [5], BBC A/B/Master [6], Oric-1 [7], Dragon 32 [8], etc. all fuelled the home computer market and made programmers out of every 8-year-old boy and girl who had access to one.
If that wasn't a revolution in terms of computing and programming, I don't know what was...!
1 [9] Curious aside: what is the breakdown of hobbyists vs. pro programmers on this site? I realise these stats aren't collated, but it could be interesting to know.
[1] http://en.wikipedia.org/wiki/ZX_Spectrum

Augmented Reality. This hasn't really taken off yet, but as ideas go I think it is huge, from being able to paint virtual arrows on the ground to help you find your destination, to decorating everything around you with useful information or aesthetic fancies.
Imagine your phone ringing across the room; you look at it and an information bubble pops up above it to tell you who is calling. How cool would that be? AR will bring massive changes in the way we think about and interact with technology.
Haunted houses would probably get significantly scarier too.
I also wanted to mention Electroencephalography [1] for brain-computer interfacing, but apparently this was first invented in the 1970's.
[1] http://en.wikipedia.org/wiki/Electroencephalography

Virtualization?
Applications like VirtualBox OSE or VMware have saved me many hours.
Adoption of Object Orientation.
The idea was around earlier (e.g. Simula), but it became mainstream in the 1990s. (IMHO, one of its greatest benefits is providing a common vocabulary amongst developers, so its widespread adoption made it much more valuable.)
I would also nominate the 3D mouse. There have been several variants in existence since the early 1990s. For anyone working with 3D, things like the SpaceNavigator [1] make life much easier. (Disclaimer: I'm not affiliated with 3Dconnexion in any way, just a satisfied and now RSI-free user.)
[1] http://www.3dconnexion.com/3dmouse/spacenavigator.phpI believe that nothing important was invented, but the perspective on software has changed a lot since the '80s. Back then there were more theoreticians involved in this thing, and now you are asking this question on a programmers' 'forum'.
Most of the ideas back then didn't get implemented, or when they were implemented they didn't have any real importance, since the software industry did not exist yet, nor did marketing, HR, development stages, or alpha versions. :)
Another reason for this lack of inventions is the fact that most people use Windows. :) Don't get me wrong, I do hate M$, but look at it this way: you have a perfectly working interface, with nothing new to add to it, maybe just some newly coloured buttons. It's also closed enough that you won't be able to do anything with it without breaking it. That's why I prefer open apps: that way you get more "open" people, to whom you can actually talk, ask questions, and propose new ideas that actually get implemented, or at least put on an open todo list, and thus you get some kind of "evolution". You don't really see anything new because you are stuck with the same basic interface "invented" lots of years ago. Did anyone actually try the ION window manager in a production environment? It has a new kind of interface, and actually lets you do things faster, even if it looks quirky.
M$, Adobe... you name it, they hold lots of patents, so you won't be able to base your work on them or on derivatives (you also won't know what kind of undeveloped technologies they hold). Look at MP3 and GIF as examples (I believe they are both free formats now, but they are also kinda dead). MP3 is the 'king' of audio even though there are a few algorithms out there much better than it; they just didn't get enough traction because they weren't pushed on the consumer market. And GIF... come on, 256 colours??? From this point of view I'm curious how many people from this thread are working on something "open" that will get reused in other projects, and how many on "closed" projects protected by NDAs.
Even if this sounds like a kind of "free Willy" speech, back in the '80s software was free, you got documentation for everything, and the hardware was simpler and easier to work with... and also more limited, so people didn't waste time implementing 3D games or web pages but worked on real algorithms.
Ctrl-C + Ctrl-V + Ctrl-X combo :)
The first true multimedia personal computer, the Amiga: the first 32-bit preemptive multitasking personal computer, the first with hardware graphics acceleration, the first with multichannel sound and in many ways a far more useful and capable machine than the multicore, multigigahertz Windows boxen that proliferate today.
The Bazaar style of development (as described in http://www.catb.org/~esr/writings/cathedral-bazaar/cathedral-bazaar/ by Eric S. Raymond). Raymond credits Linus Torvalds's release of the Linux kernel in 1991 as the first use of the Bazaar style of development.
Sensor networks: very tiny (nano scale) computers form ad-hoc p2p networks and transmit "sensory" information.
3D printing: Star Trek replicator for physical objects (no Earl Grey tea yet).
DNA computing: Massively parallel computing for some types of problems.
Translation software with community support for manual corrections and recommendations, followed up by an AI bot that learns patterns from those corrections to eventually distinguish and correctly predict ambiguity across different translations and contexts.
While it's true Google Translate [1] might not be that beast, it is the mother, or perhaps the grandmother, of a system just waiting to be developed.
If you think about it, textual language is really input to the brain: the eyes see the text and send images to the brain, which then translates this into understanding.
While it's true that communication (especially human communication) is an advanced topic, the basics are input (with context) -> translation -> understanding.
Why do we still have no really good way to send emails to distant co-workers or partners who don't speak our language? This is obviously Phase 1.
Once this is complete, we can move on to stuff like real-time phone call translation.
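As a rough sketch of what that Phase 1 plumbing might look like, here is a minimal Python outline of the input-with-context -> translation -> understanding pipeline. The translate() backend here is hypothetical; it stands in for whatever engine (statistical, neural, or community-corrected) you would actually plug in.

```python
# Minimal sketch of the pipeline described above: input (with context) -> translation.
# The translate() function is a hypothetical placeholder, not a real service API.

def translate(text: str, source: str, target: str, context: str = "") -> str:
    # A real system would call a translation model or service here, and could use
    # `context` (sender, thread history) to help resolve ambiguous wording.
    raise NotImplementedError("plug in a real translation backend")

def deliver_email(body: str, sender_lang: str, reader_lang: str, thread: str = "") -> str:
    """Translate an incoming message so a co-worker can read it in their own language."""
    if sender_lang == reader_lang:
        return body                                   # nothing to do
    return translate(body, sender_lang, reader_lang, context=thread)
```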
Instead, month after month, our greatest intellectual assets are involved in other, more crucial projects, like space research, meteor detection, or trying to prove the Bible wrong (yawn).
How about we dedicate more time to basic practical communication?
[1] http://en.wikipedia.org/wiki/Google_TranslateUSB Keys/Thumb drives
USB keys were the effective replacement for the floppy, which was still superior to the CD or DVD for simple file transfer.
I think a very important invention for computing in the past 50 years was Google. The internet means nothing without a good tool to search it. The advent of the search engine revolutionized the internet and enabled it to be monetized by the little guy.
RAID [1] (1988).
Arguably this is just an application of error correction codes from years gone by, but then arguably everything in computer science can be reduced to basic mathematics which has been around for millennia.
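To make the error-correction connection concrete, here is a toy sketch (my own illustration, not the scheme from the paper) of the XOR parity idea used in RAID levels like RAID 5: with one parity block, any single lost data block can be rebuilt from the survivors.

```python
# Toy illustration of RAID-style parity. With N equal-sized data blocks and one
# XOR parity block, the contents of any single lost block can be reconstructed.

def xor_blocks(*blocks: bytes) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]      # three equal-sized "disks"
parity = xor_blocks(*data)              # stored on a fourth disk

# "Disk" 1 fails: rebuild its contents from the survivors plus the parity block.
rebuilt = xor_blocks(data[0], data[2], parity)
assert rebuilt == data[1]
```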
[1] http://www.cs.cmu.edu/~garth/RAIDpaper/Patterson88.pdfAugmented Reality
Where a view of the real world is combined with virtual elements in some way.
The term Virtual Reality was coined in 1989, a few years before the term "Augmented Reality" came into existence.
Some early enabling technologies were invented before 1980 but the concept itself dates from the early nineties (at least that's what Wikipedia says.)
http://en.wikipedia.org/wiki/Augmented_reality#History
Maybe a forum of science fiction authors would give you more interesting answers? ;-)
I suspect there's a bit of a fallacy at work here: you're viewing the history of technology and science as a steady march of progress, as a linear phenomenon. I suspect it is in fact a process of fits and starts, context, economics, serendipity and plain ole randomness.
You should feel fortunate that you were at the centre of one of the great waves of history; most people will never have that experience.
A few answers mention quantum computers as if they're still far in the future, but I beg to differ.
There were vague mentions of the possibility of quantum computers in the 1970s and 1980s (see the timeline on Wikipedia [1]); however, the first "working" 3-qubit NMR quantum computer was built in 1998. The field is still in its infancy, and almost all progress is still theoretical and confined to academia, but in 2007 a company called D-Wave Systems presented a prototype of a working 16-qubit, and later that year a 28-qubit, adiabatic quantum computer. Their effort is notable since they claim that their technology is commercially viable and scalable. As of 2010, they have 7 rigs, and the current generation of their chips has 128 qubits.
I recommend this short 24-minute video [2] and the Wikipedia article [3] on D-Wave for a quick overview, and there are a lot more resources on this blog [4] written by D-Wave's founder and CFO.
[1] http://en.wikipedia.org/wiki/Timeline_of_quantum_computingMPI and PVM for parallelization.
Utilization of functional programming/languages within OS core development.
'Singularity', and all projects like it, i.e. development of operating systems in managed code.
Not sure about 1980, but the AI community has been an idea-generator for decades, and they're still at it.
To answer a slightly different question: I think we need big ideas in the areas of Privacy, Trust and Reputation. My computer has the ability to capture almost everything about me: where I am, what I say, what I type, what I see... A huge amount of information, and an equally large number of entities (people, shops, sites, services) with whom I might want to share some of that information, even if it's just a single piece of data.
My information needs to be mine (not Google's, Facebook's or Apple's). My computer needs to use it on my behalf, and so trust needs to be end-to-end. Then we can disintermediate the new information middlemen.
(Widespread) Encryption. Without encryption, no online financial transaction would ever take place. And this is still an area which could use more innovation and user-friendliness.
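As a minimal illustration of the kind of primitive involved (only a sketch; real payment systems wrap such primitives inside protocols like TLS rather than calling them directly like this), here is symmetric authenticated encryption using the third-party Python `cryptography` package.

```python
# Minimal sketch of symmetric, authenticated encryption using the third-party
# `cryptography` package (pip install cryptography). Purely illustrative: real
# financial systems negotiate keys and layer this inside protocols such as TLS.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # shared secret; in practice derived/exchanged securely
cipher = Fernet(key)

token = cipher.encrypt(b"PAY 42.00 EUR to account 123")   # ciphertext + integrity tag
print(cipher.decrypt(token))                               # b'PAY 42.00 EUR to account 123'
```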
Multi-Agent Systems.
You can go back to distributed artificial intelligence roots, and I think still stay safely this side of the 80s.
There are many components to multi-agent systems, with lots of study going into speech acts and cooperation, so it's rather difficult to point and say, "See, here, this is different, innovative and important!" But I'll try anyway. :-)
I think the Belief-Desire-Intention model is particularly noteworthy. Agents have internally constructed models of the world. They have particular desires, or goals, and formulate plans for how to interact with the world as they know it in order to achieve those goals; those committed plans make up their intentions.
Or, to use an analogy, the characters in Tron, the movie, have a certain understanding of how the world around them works. They do not KNOW the whole world, and they can be mistaken about parts of it. But they have desires and goals, and they come up with plans to try to further them. If you've seen Tron, I'm sure you'll get the analogy.
It hasn't had much of an impact on computing YET. But, see, things that have an impact on computing seem to take a few decades anyway. See: OOP, GC, bytecode compilation.
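For readers who prefer code to analogies, here is a minimal, hypothetical Python sketch of the belief-desire-intention loop described above. The class and the toy "answer the phone" desire are my own invention for illustration, not any particular agent framework.

```python
# A minimal, hypothetical sketch of the belief-desire-intention structure:
# beliefs are revised from percepts, achievable desires are turned into
# committed plans (intentions), and intentions drive action.

class BDIAgent:
    def __init__(self, desires):
        self.beliefs = {}            # the agent's (possibly wrong) model of the world
        self.desires = desires       # goals: (name, possible-under-beliefs?, plan) tuples
        self.intentions = []         # plans the agent has committed to

    def perceive(self, percept):
        self.beliefs.update(percept)                 # belief revision from observation

    def deliberate(self):
        # Commit to a plan for every desire that looks achievable under current beliefs.
        self.intentions = [plan for name, possible, plan in self.desires
                           if possible(self.beliefs)]

    def act(self):
        return [plan(self.beliefs) for plan in self.intentions]

# Example: an agent that wants to answer the phone when it believes it is ringing.
agent = BDIAgent(desires=[
    ("answer phone",
     lambda b: b.get("phone_ringing", False),
     lambda b: f"answering call from {b.get('caller', 'unknown')}"),
])
agent.perceive({"phone_ringing": True, "caller": "Alice"})
agent.deliberate()
print(agent.act())   # ['answering call from Alice']
```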
The massive increases in processor speed that have occurred over the last 30 years can't be overlooked. All manner of clever ideas, such as pipelining and branch prediction, as well as improvements on the electronic side of processor design, mean that programmers today can worry more about the design and maintainability of their programs and worry less about counting clock cycles.
The mouse - There have been posts about human interaction. To me, the mouse was the gateway to human interaction. Without it, we'd still be typing and not clicking and dragging, even with our fingers.
GUI - Complemented the mouse perfectly. I work in an environment where an AS/400 is the backend of one of our major apps. Yeah... interesting stuff, but it just reminds me of the screens 'Bill Gates' is working on in the movie 'Pirates of Silicon Valley', even though that's not what it was. To me, 1 and 2 are the reason anybody, including grandpas and grandmas, can use a computer.
Excel / spreadsheets - Someone mentioned this before, but it's worth mentioning again. It's so user-friendly and is a great entry point for non-technical users to try their hand at simple programming concepts when performing calculations on cells. Granted, it came out before 1980, but the versions post-1980 are when the technology in spreadsheets evolved.
Internet (of course) - Not sure how people wrote code without it! Don't flame me for repeating because this belongs on every list.
INTELLISENSE - LOVE IT LOVE IT LOVE IT!!!!
The successful integration of different programming paradigms into single programming environments.
The exemplar of this (for me) is the Mozart/Oz programming system [1], which integrates functional, OO, logic, concurrent and distributed programming mechanisms into a coherent whole. There are other examples though.
[1] http://mozart-oz.orgThe rise of motion sensors in gaming, which does away with traditional game joysticks and brings the user much closer to the game itself. This complements our ever-changing urban landscape and lifestyle, in which we get limited physical activity. This advancement in gaming definitely induces at least some physical activity while doing something that one enjoys. It is definitely better than doing the same mundane reps at your gym.
I think most concepts in computing have mostly been undergoing refinement, but there have been some new developments, particularly in distributed computing.
The majority of work has been refinement, and while many modern systems are little better than the original concepts first described in the 60s or earlier, some are orders of magnitude better.
I would say that CDMA was/is an important and powerful new idea that was created after 1980.
The C++ programming language (1983); template metaprogramming (1994).
X.500 [1] and the X.500 series of standards (circa 1988). While the X.500 standards were inspired by telco standards [2] dating back decades, they are significant as they paved the way for the widespread use of LDAP/AD and our current incarnation of X.509 certificates, to name a few.
[1] http://en.wikipedia.org/wiki/X.500A really hard question since, aside from ridiculously improved hardware, there are few things that would count as significantly positive inventions after that time. There are, though, many significant inventions from before 1980 that only affect people now because they were infeasible back then.
Heck. Descent
The Enterprise Service Bus [1] would appear to be a fairly recent 'invention', though of course it is based on much older technologies.
[1] http://en.wikipedia.org/wiki/Enterprise_service_busThe Eclipse Memory Analyzer [1],
and its use of the Lengauer-Tarjan dominator tree algorithm [2] for memory usage analysis.
[1] http://www.eclipse.org/mat/Digital music synthesizers.
I think the whole music scene was affected by the availability of cheap polyphonic synths. The early polyphonic synths were effectively multiple analog synths (discrete or using CEM or SSM chips). They were both expensive and very limited. During the '80s, the first digital systems arrived (I am not sure, but I think Kurzweil was one of the first). Today, almost all are digital; even the analog ones are typically "virtual analog".
regards
EDIT: oops - I just found out that the Fairlight CMI was invented in 1978. So forget the above - sorry.
I'm not qualified to answer this in the general sense, but restricted to computer programming? Not much.
Why? I've been thinking about this for a while and I think we lack two things: a sense of history and a way to objectively judge everything we've produced. This isn't true in all cases, but it is in general.
For history, I think it's just something not emphasized enough in popular writing or computer science programs. Take language features, for example. A canonical source might be HOPL, but it's definitely not common knowledge among programmers to be able to mark the point in time or in which language a feature like GC or closures first appeared. And of course after that there's knowledge of progression over time: how has OOP changed since Simula? Compare and contrast our sense of history with that of other fields like maybe political science or philosophy.
As for judgement, this is really a failure on our part to seek objective measures of success. Given foobar, in what measurable way has it improved some aspect of the act of programming, where foobar is any of design patterns, agile methodology, TDD, etc., etc.? Have we even tried to measure this? What do we even want to measure? Correctness, programmer productivity, code legibility, etc.? How? Software engineering should really be picking away at these questions, but I've yet to see it.
I think part of the problem with these answers is that they are either not well researched or are pointing to a new implementation of some technology that has seen significant "improvements." However, that is not a significant invention. For instance, any answer talking about functional programming or object-oriented programming just fails; most of these ideas have been circulating since before most of the participants of SO were born.
In order to start thinking about this, I need a model for what "innovation" means.
The best model I've seen is The Technology Adoption Life Cycle. You can get an overview at this Wikipedia Article [1].
Using this model, I began to ask myself... at what stage of the life cycle is software itself? We can think of "software" as a distinct technology from machinery going all the way back to Babbage, or perhaps more precisely, to Lady Ada Lovelace.
But it surely remained at the very early pioneering stage at least until about 1951. That's the year programmed computers "went commercial" in terms of selling a model for a computer product, and building lots of units of that model. I'm thinking of the machine that Univac sold to the Census Bureau.
From 1951 to about 1985, software innovations were numerous. They mostly had to do with extending the span of computing to an ever wider field of endeavor. In parallel, mass marketing and mass production kept bringing the cost of entry down till the Apple and IBM-PC made a programmable device a commonplace appliance.
Somewhere between 1980 and 1985, I'd say that software passed from the Innovator's domain to the "Early majority" domain. Sorry, guys, but that makes all of you who participated in MS-DOS, the Mac, Windows, C++ and Java early majority rather than innovators. That doesn't preclude your having done significant innovation on your own turf and in your own projects. It just means that the field itself had moved on from the earliest stage.
While the Internet's precursor had been around since the 1970s, it wasn't until Al Gore invented the internet (sorry) that everybody hooked up. At that stage, software passed from the early majority to the late majority. This shift was subtle, as the top of the bell curve suggests. Not every shop moved from early majority to late majority at the same time.
I don't think software has quite passed into the "laggard" stage yet, but I think that real innovators are tackling the problem of producing progress on different fronts today.
Two fronts that I can think of are Bioengineering and Information Appliances. Both of these fields require software, but the main thrust is not software innovation. It's applying software to uncharted territory. There are probably lots of other fronts that I'm not even aware of.
[1] http://en.wikipedia.org/wiki/Technology_adoption_lifecycleI would vote, as a Debian user, for package management. It makes OSX and Windows 7 look like primitive amateurish playthings.
But since package management was already mentioned, I will vote for X. The network transparent window server has made a lot of applications possible. It's wonderful to be able to seamlessly summon programs running on different computers side by side on the same screen.
And that was a tad more impressive in the late 80s.
Bitcoin [1]'s solution to the double-spending problem. It was used to create a decentralized electronic currency. A variant called Namecoin [2] uses the same technology to build a decentralized naming system (similar to DNS).
There were attempts to create cryptocurrency in the past (and the idea is certainly not new), but Bitcoin seems to be the first implementation which took off. Its unique P2P algorithm solves the double-spending problem without relying on any trusted authority.
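As a toy illustration (a deliberate simplification, not Bitcoin's actual block or transaction format), the core trick is chaining blocks by hash and making each block expensive to produce, so rewriting history to spend the same coin twice would mean redoing all of that proof-of-work.

```python
# Toy proof-of-work sketch of the idea behind Bitcoin's double-spending fix.
# Each block commits to its predecessor's hash, and extending the chain costs
# work, so altering an old transaction means re-mining every later block.
import hashlib

def mine(prev_hash: str, transactions: str, difficulty: int = 4):
    """Find a nonce so the block hash starts with `difficulty` hex zeros."""
    nonce = 0
    while True:
        block = f"{prev_hash}|{transactions}|{nonce}".encode()
        digest = hashlib.sha256(block).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce1, h1 = mine("0" * 64, "alice pays bob 1 coin")
nonce2, h2 = mine(h1, "bob pays carol 1 coin")   # block 2 commits to block 1 via h1
print(h1, h2)
```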
[1] http://en.wikipedia.org/wiki/BitcoinProtected memory. Before protected memory, if your program made a mistake, you could start executing code anywhere, virtually always hanging the entire machine. That's right, reboot time!
Low cost of hardware. My first computer cost $500 in 1978, a huge sum at the time. Lowering costs put PCs on every desk.
Natural Language Processing [1]. The first time I encountered this was in the early 1990s with a program from Symantec called Q&A [2] that let you query the database by typing English queries. I am still impressed by it to this day.
[1] http://en.wikipedia.org/wiki/Natural_Language_Processing#Short_history_of_evaluation_in_NLPStackOverFlow.com
Paxos protocol. It's difficult to describe how valuable it is in the internet era.
FPGA [1]s are a major breakthrough invented after 1980.
[1] http://en.wikipedia.org/wiki/Field-programmable_gate_arrayComputer Graphics, Special Effects, and 3D Animation
Top ten software engineering ideas / picture [1]
[1] http://www.yourdonreport.com/wp-content/uploads/2007/11/compaidtoptenalb.pngI do not know if somebody has already mentioned it, but "machine learning" is a significant new development that is advancing fast, with intelligent spam filtering, stock market prediction, intelligent machines like robots, ...
Maybe machine intelligence will be the next big thing.
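A tiny, hedged example of the spam-filtering case, using scikit-learn's naive Bayes text classifier on made-up toy data (purely illustrative; a production filter would use far more data and features):

```python
# Minimal sketch of "intelligent spam filtering" with scikit-learn:
# bag-of-words features fed to a naive Bayes classifier, trained on toy examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = ["win free money now", "cheap pills online",
               "meeting moved to friday", "lunch tomorrow?"]
train_labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_texts)      # word-count feature matrix
model = MultinomialNB().fit(X, train_labels)

test = vectorizer.transform(["free money friday"])
print(model.predict(test))                     # classifier's best guess on unseen text
```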
Let's see, Connection Machines (Massive Parallelism) for one.
Anyway, this whole question seems like an egoboo for Alan Kay since he invented everything.
The mathematics for quantum computing has been around since before 1980, but the hardware isn't here yet and may be physically and economically infeasible for many years to come.
The Personal Computer.
Hands down, the most important part of computing in the last thirty years is that everyone is now part of it. Computers for home use only date to 1977 or so, and widespread adoption took until well into the 80's. Now, kindergartens, senior centers, and every next door neighbor you'll ever have owns one.
The Internet.
That's it.
I'd have to say that the biggest invention in computing since 1980 is Moore's law. There were tons of really cool, innovative things created in the 1960s and 1970s - but they were insanely expensive one-off projects. And most of these projects are lost in the mists of time.
Today, the cool, innovative project gets a couple rounds of funding and is available on everybody's desktop or web browser in 6 months or so.
If that's not innovative, what is?
I would say Linux and the reification of the worse-is-better philosophy, but you can argue that those are older. So I'd say: quantum, chemical, peptide, DNA, and membrane computing; (re)factoring in a non-ad-hoc, automated fashion; aspects; generic programming; some types of type inference; some types of testing.
The reason why we have no new ideas: software patents (these date from the late '60s...), corporations, and education.
Personal Broadcast Communication
Facebook, Twitter, Buzz, Qaiku... the implementations vary, focusing on different aspects - managed audience, conciseness, discussions. The specific services come and go, but the new concept of communication remains. Blogs are of course what started this, but the new services have made the communication socially connected, which is an essential difference.
Not quite sure if this exactly goes under the subject of computing, but it's something significant, and it was only made possible by computing and networks.
Open Croquet http://www.opencroquet.org - A Squeak/Smalltalk-based 3D environment which lets multiple users interact with and program the environment from inside itself. It has its own object replication protocol for sharing environments efficiently and scalably over the internet. It's difficult to describe because there just isn't anything else remotely like it...
1) I'm proposing this because when I try to explain to other people what it is, I find them expecting me to compare it to other things... and I still haven't found anything remotely like it. Although there are many elements present from other systems (e.g. Smalltalk, OpenGL, Etoys, virtual worlds, remote collaboration, object-oriented replication architectures), the whole seems to be much more than the parts...
2) Unlike many of the technologies mentioned here it hasn't settled down into a widely exploited commercial niche...
Both points are signs of an early-stage technology.
I suspect that when Alan Kay started work on it, he might have been thinking about the theme of this question in the first place.
http://www.onlisareinsradar.com/archives/001281.php
Fast clustering algorithms (O(n log n) in the number of data points) such as DBSCAN (from 1996) [1] seem to all date from after 1980.
These have been part of a general wave of progress in data-mining techniques.
Contrast this with the lack of progress in line finding, for which poorly scaling techniques like the Hough transform still seem to represent the state of the art.
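For reference, here is a short sketch of how DBSCAN is typically used today, via scikit-learn. With a tree-based neighbour index the neighbourhood queries scale roughly O(n log n) in favourable low-dimensional cases, which is the scaling claimed above.

```python
# Short sketch of density-based clustering with scikit-learn's DBSCAN.
# algorithm="kd_tree" uses a tree index for the eps-neighbourhood queries,
# giving roughly O(n log n) behaviour on low-dimensional data.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Two dense blobs plus a handful of scattered noise points.
points = np.vstack([
    rng.normal(loc=(0, 0), scale=0.2, size=(100, 2)),
    rng.normal(loc=(5, 5), scale=0.2, size=(100, 2)),
    rng.uniform(low=-2, high=7, size=(10, 2)),
])

labels = DBSCAN(eps=0.5, min_samples=5, algorithm="kd_tree").fit_predict(points)
print(set(labels))   # cluster ids; -1 marks points DBSCAN treats as noise
```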
[1] http://en.wikipedia.org/wiki/DBSCANDOS. I'm not a DOS fan, but thanks to DOS and the IBM-PC, computers are what they are today (for better or worse).
20 years ago: Object oriented programming - To better handle software complexity.
Now: Cloud computing - To better handle hardware complexity.
Future: something Declarative, but it will take another 20 years.
If we are serious about answering this question as a group,
I unfortunately believe we need more than a string of random well-intentioned posts!
I know it sounds boring; getting things done often is!
Write a list of powerful ideas in the area of computing.
Maybe we should define a few categories to separate them, because videoconferencing somehow does not fit well with object-oriented programming.
Seeing ideas by categories makes it easier to generate them without redundancy.
It's too easy to get sidetracked into teleportation if quantum computing is not kept away from flying cars.
Try to attribute each of them a date
This will settle the before/after-1980 question and restrict debate about each idea to its own entry.
It will be fun to dig for earliest reference, first known implementation, etc.
Plus, this will allow people like me, who were 2 years old in 1980, to have a better idea of what was common programming knowledge in 1980 (nothing beats being there at the time).
Try to attribute each of them the current state of their implementation
OK, some ideas were sci-fi in 1850, with early development in the 1970s and serious breakthroughs in the 1990s.
Some ideas are just starting to get around. Some are almost forgotten.
Probably the wiki thing is a good idea.
I think this could really get somewhere if slightly organized.
I did not check, but maybe this whole thing already exists on the net (I usually find that if you think of something, someone has already done it).
What do you think?
Cheers!
Perhaps the shift from client-server to peer-to-peer. One of the reasons I hate the whole cloud/SaaS thing is that it is a return to client/server.
I've got a VAX in my pocket and you want me to pretend it's a VT-100?
The teevee tube box
It's a little thing I like to call the internet.
Software Patents