I've always heard that C is the language of choice to use for embedded systems, or anything that needs to run at maximum speed. I never developed a fondness for C, mostly because I don't like pointer arithmetic and the language is barely a rung above assembler.
On the other hand, ML languages are functional, garbage-collected languages, and OCaml even has an object model, yet they have a reputation for being as fast as C. ML languages offer all the abstraction anyone could ask for to write high-level, concise code, yet they retain the speed necessary for writing high-performance applications.
OCaml in particular can be used anywhere that C is traditionally used, such as for embedded devices, graphics drivers, operating systems, etc. By all rights, OCaml should have taken over the world by now, but hardly anyone has heard of the language, let alone used it.
This is a subjective question, but why have OCaml and other ML languages remained so obscure, while C and other languages became popular?
The first answer is that nobody really knows why languages become popular, and anybody who says otherwise is deluded or has an agenda. (It's often easy to identify why a language fails to become popular, but that's another question.)
With that disclaimer, here are some points that are suggestive, most important first:
The first mature C compiler appeared in 1974; the first mature OCaml compiler appeared in the late 1990s. C has a 25-year head start.
C shipped with Unix, which was the biggest "killer app" of all time. For a long time, every CS department in the world had to have Unix, which meant that every instructor and everyone taking a CS course had an opportunity to be exposed to C. OCaml and ML are still waiting for their first killer app. (MLdonkey is cool, but it's not Unix.)
C fills its niche so well that I doubt there will ever be another low-level language devoted only to systems programming. (To see the evidence in favor, read Dennis Ritchie's paper on the history of C from HOPL II.) It's not even clear what OCaml's niche is, and Standard ML's niche is only a little clearer. So Caml and ML have quite a few competitors, whereas C killed off its only competitor (BLISS).
One of C's great strengths is that its cost model is very predictable: anyone can look at a small fragment of C code and instantly get an accurate idea of what machine operations will have to be performed to execute it. OCaml's cost model is much less clear, especially because memory allocation is much less explicit, and the overall cost of memory allocation (the cost of allocation plus the costs incurred during garbage collection) depends on emergent properties like how long objects live and which objects refer to other objects. The net result is that performance is hard to predict, and hard to analyze even after the fact. (OCaml's memory-profiling tools are not what they should be.) As a result, OCaml is not good for applications where performance must be very predictable---like embedded systems.
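To make the hidden-allocation point concrete, here is a minimal sketch (the function names are my own, not from any real codebase): both functions add two ints, but the second takes a tuple, so every call site allocates a heap pair that the garbage collector later has to reclaim, even though nothing in the source says "allocate".

```ocaml
(* Illustrative sketch: same arithmetic, different cost model.
   add_plain performs no heap allocation; add_pair forces each
   caller to allocate an (x, y) pair on the heap. *)
let add_plain x y = x + y

let add_pair (x, y) = x + y

let () =
  assert (add_plain 2 3 = 5);
  assert (add_pair (2, 3) = 5)
```

(Whether a given allocation is optimized away depends on the compiler and flags, which is exactly the unpredictability being described.)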
C is a language with a standard and many compilers. OCaml is a software artifact: the only compiler is from a single source, and the compiler is the standard. And that standard changes with every release. For people who value stability and backward compatibility, a single-source language may represent an unacceptable risk.
Anybody with a halfway-decent undergraduate compiler course and a lot of persistence can write a C compiler that more or less works, and with adequate performance. To get an implementation of OCaml or ML off the ground requires a lot more education, and to get performance comparable to a naive C compiler requires a lot more work. This means there are a lot fewer hobbyists to mess around with languages like OCaml, so it's harder for the community to develop a deep understanding of how to exploit it.
I think the problem with OCaml is that it isn't too useful "out of the box". The eventual reason why people use a language is because it has libraries they need. With nothing "out of the box", though, nobody gets far enough into a project to realize that they need to write a library. The result is a language with no libraries, which makes it hard to write "real apps".
I think this is what OCaml suffers from -- nobody bothers to start "real projects" in it because all there is is a programming language. Yay, I can add two and two and print the result. The result is a collection of libraries that are mostly academic abandonware (the author got his PhD and moved on), which isn't too helpful for practicing programmers.
(I know there is work under way to change this, with projects like "Batteries Included". Come back here in 5 years, and perhaps OCaml will be more popular.)
There are some exceptions to this rule. Java started off with no libraries, but Sun paid people to write them all in house, and then they marketed the hell out of it. Java certification, Java-specific hardware, Java books, Java classes, etc. They even convinced most universities to teach it exclusively, even though it isn't a very good language for learning programming.
The result was popularity. Money can solve a lot of problems.
Over in the functional language arena, we can see that Haskell is becoming quite popular. I think most of the popularity is due to people like dons who write useful libraries and never stop marketing the language. Every day you see a few Haskell articles on Programming Reddit. This keeps it stuck in people's minds until they finally decide, "I am going to try Haskell." When they do, they see useful things like web frameworks, object databases, OpenGL libraries, and XML processing libraries. This means that they can actually do something useful "Right Now". So between the potential to be productive and hearing about it a lot, Haskell has gained a lot of popularity.
CL has many of the same libraries as Haskell and is almost as fast, but nobody talks about it, so it "feels dead". Indeed #lisp is much quieter than #haskell, but Lisp is still a very productive language with a lot of libraries. No other language has SLIME. But marketing is very important, and Haskell does it better than Lisp or OCaml (and competes for the same userbase).
Finally, some people will never "get" programming, so breaking their mental model (variables are boxes with values, code executes top-to-bottom) will ensure that they don't use your language. This type of programmer is a large percentage of the programming population, so this further limits the possible userbase of abstract languages like Lisp, Haskell, and OCaml.
I like OCaml a lot as a language. BUT...
The tool support just isn't there. The debugger works only passably and does not run on Windows (last I checked), and there just aren't that many development tools available for the language.
Its type system is, at times, a bit too strict. For someone who does not understand how type inference or the ML type system in general works, the fact that you can't add an integer to a float is an immediate turn-off.
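For readers who haven't hit this: OCaml keeps integer and float arithmetic entirely separate, with `+` for ints and `+.` for floats, and no implicit conversion between them. A minimal sketch:

```ocaml
(* let bad = 1 + 2.0 *)          (* type error: 2.0 has type float,
                                    but + expects two ints *)
let ok = float_of_int 1 +. 2.0   (* convert explicitly, then use +. *)

let () = assert (ok = 3.0)
```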
The standard library can sometimes have an inconsistent feel.
The object model seems somewhat tacked-on and the standard library barely uses it, opting instead for module-based libraries.
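As a small sketch of the two styles (the `counter` object is my own hypothetical example, not a stdlib feature): the standard library exposes plain functions through modules such as `List` and `String`, while the object system sits largely unused beside them.

```ocaml
(* Module style: plain functions grouped in modules, as the stdlib does. *)
let lengths = List.map String.length ["a"; "bc"]

(* Object style: structurally typed objects, which the stdlib rarely uses. *)
let counter init = object
  val mutable n = init
  method incr = n <- n + 1
  method get = n
end

let () =
  assert (lengths = [1; 2]);
  let c = counter 0 in
  c#incr; c#incr;
  assert (c#get = 2)
```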
There are lots of other things that basically amount to the language not feeling "polished" and that drives people away during the very critical period when they're picking up a language and trying to decide whether or not they like it.
I think its most important legacy will be that it, along with other ML dialects, has had a very strong influence on other functional languages. Most of the current generation functional languages take the best elements from ML dialects and refine some of the annoyances.
This is a bit of an apples-to-oranges comparison. OCaml is a fairly young language and there has never been a serious, sustained effort to push it into the mainstream (excepting Microsoft's current work with F#). Unlike C, it isn't the lingua franca of the most widely supported and imitated enterprise operating system (i.e., UNIX). Unlike Java, it hasn't had a major corporation pushing it as a next-generation computing platform. Unlike Perl, Python, and Ruby, it hasn't taken hold in a high-profile, influential niche (i.e., its niche is programming language and automated reasoning research—not very high-profile compared to web development). Hence, it is not super-popular.
 In fairness, the original ML language has been around since the 70s. But OCaml didn't appear until 1996 and it didn't inherit the Standard ML libraries. It is, in practical terms, a younger language than C, C++, Java, Python, Haskell, or even Ruby.
Embedded systems often require two things: speed and determinism. OCaml can provide speed, but the fact that it has a garbage collector makes it inherently nondeterministic, and for a real-time system that simply won't do.
The OCaml community failed to develop a large and reliable standard library (beyond what comes with OCaml today) that makes application development easy. There are several attempts to solve the problem but just take a look at Python or Ruby to see what is missing. OCaml is a great language if you want to solve an algorithmic problem that does not depend too much on having to interact with advanced standard modules like XML, networking, data calculation and so on, which you would rather not implement yourself.
I believe that part of the problem is how modules are mapped to files by OCaml: conceptually all *.ml files live in the same namespace and directories have no meaning. This makes it hard for a community to evolve a library. If the compiler mapped directory hierarchies into module hierarchies, I would see a better chance of a standard library evolving. This would, however, require considerable effort by the core compiler developers. (I am aware of packed modules, but I think they are a kludge.)
Another library problem is binary compatibility between compiler releases. It's pretty safe to say that all library code must be re-compiled after a compiler upgrade. This makes it difficult to provide binary releases of modules or libraries.
Probably because too many people were taught ML as part of an introduction to weird and confusing theoretical stuff about types. That's what happened to me.
I was shown ML and Smalltalk around the same time. Smalltalk just looked damned cool, and it was immediately understandable what OO was for and how you could make pretty, interactive stuff in that environment. ML was about abstract mathematical things that didn't seem relevant to what I wanted to do. And unlike C, it didn't promise to let me write fast games on 16-bit micros.
This is, of course, deeply unfair and subjective. But that's likely to be the true story for most people.
These days I guess the question would be: now that I do feel I need to know this weird and confusing theoretical stuff about types, why would I choose ML over Haskell or Erlang?
I believe that the main issue is the lack of an actual standard library. Hence the OCaml Batteries Included project, which is expected to largely improve the situation. It's supposed to enter beta phase within a few days, so you'll have to ask the question again in a year or so. http://batteries.forge.ocamlcore.org/
I have enjoyed coding in both ML and C for a wide variety of projects. The thing preventing me from using ML in embedded projects (most of which have real time constraints, and require validation) is garbage collection.
There is research into memory management with regions (see MLKit), but the complexity of the implementations and the training required to use them properly (and the attendant risks) have been an impediment to adopting them. http://www.itu.dk/research/mlkit/index.php/Main_Page
Well, if it is about money, as @jrockway says, we'll see whether F# gains popularity the way Java or C# did.
As for me, I guess developers don't feel comfortable with the functional way of doing things (at the F# session at TechDays 2009, only about 10 people out of almost 100 said they knew functional programming).
I started OCaml this year. I had never gotten my hands dirty with functional programming before, but now I'm constantly learning new things from OCaml and the functional way of solving problems (though I can't say I'll give up C# for OCaml :)).
I agree that poor Windows support, a steep learning curve and slim standard library have all stifled OCaml's uptake in the past but I would add that there has been a huge lack of tutorial information (e.g. books) about OCaml compared to mainstream languages like Java.
Also, the kinds of people who know languages like OCaml are hugely heterogeneous. Among web programmers, maybe 1 in 1,000 will have heard of OCaml. Among people doing scientific computing at Cambridge University, about 90% of the people I knew were fluent in OCaml. Indeed, I was one of the last among my friends to learn OCaml. We even ran OCaml on our 256 CPU supercomputer...
I should also mention that these issues are rapidly being addressed. OCaml has reinvented itself for web programming recently with projects like Ocsigen and already has at least two major industrial success stories in that context. There is another new book on OCaml out now. The community are collaborating on a comprehensive standard library called "batteries included" that just went into beta release and looks fantastic. A multicore friendly version of OCaml is about to be released. The latest version of OCaml also includes many great new features such as lazy patterns and dynamically loaded native code OCaml libraries.
I think part of the problem is that functional programming is just not a natural way for most people to think (and I say this as someone who has a great interest in, and appreciation for, functional programming). This is compounded by the fact that the vast majority of programmers today started out learning procedural programming (most popular OOP languages are still procedural at heart) and so functional languages are hard to adjust to initially.
When I started university I already knew a reasonable amount of BASIC, C++ and Java and a bit of Pascal and x86 assembly language. I was far from an expert but had reached the (slightly naive) conclusion that all programming languages were basically the same with slightly different syntax. Our introduction to programming course used ML which rapidly disabused me of that notion. I had trouble getting my head around ML at that stage of my programming career and didn't really see the point of functional programming. I think it takes a bit more experience with some of the problems of procedural programming to really appreciate the benefits of a functional approach.
Our ML lecturer often claimed that expressing problems recursively was more 'natural' and easier than using loops or other procedural concepts. I was never convinced by that claim and still don't buy it. Recursive functions can sometimes provide particularly elegant and concise solutions to problems but I still find it an unnatural way to think about problems. Perhaps if you have a very strong mathematical background it seems more intuitive but I don't think it is easy for most people to think recursively. Given the centrality of recursive functions to the functional programming paradigm I think this may also be a reason for the lesser popularity of functional languages.
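For what it's worth, OCaml itself does not force the choice; it supports both styles. A minimal sketch of the same sum written both ways:

```ocaml
(* The recursive style the lecturer favoured... *)
let rec sum_rec n = if n = 0 then 0 else n + sum_rec (n - 1)

(* ...and the imperative loop style, which OCaml also supports. *)
let sum_loop n =
  let total = ref 0 in
  for i = 1 to n do
    total := !total + i
  done;
  !total

let () =
  assert (sum_rec 10 = 55);
  assert (sum_loop 10 = 55)
```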
There is also a feedback effect to language popularity. When I started programming I wanted to know how to program graphical effects and games. After learning a bit of BBC BASIC and later QBASIC I naturally investigated what the most common languages used by the demo scene and games programmers were and set about learning C++ and x86 assembly. Nowadays some new programmers might want to know how to produce web applications and so will gravitate towards learning PHP, Ruby or C#. There are very few application areas for self motivated beginner programmers where the answer to 'what is the best language to learn to program something like X' will be 'Ocaml'.
Many of the practical reasons given for Ocaml's limited popularity (lack of mature libraries, debuggers, IDEs, etc.) are addressed by Microsoft's official support for F# as a first class .NET language. It will be interesting to see if F# helps bring about a greater level of popularity for functional programming.
I believe the core of the problem is politics. The Ocaml developers are principally interested in research and do not have the resources to provide and maintain a rich library. However, they're also unwilling to release control of the product to the community, which does have those resources. The result is that several attempts to solve this problem relied on cooperation and funding for third-party libraries that never materialized, and those attempts failed. Batteries will fail for the same reason, unless the Ocaml developers change their attitude.
I use Ocaml to develop my product, and I have a simple rule: minimise dependence on third-party code. When a third-party item is useful, if at all possible, incorporate the source code directly into the package. For example, OCS Scheme and Dypgen are essential parts of the Felix parser, so they're copied into our sources to give us some control over them. (The control is somewhat illusory, since Dypgen at least is so complex it is unlikely we could maintain it, but at least we have a copy we think works.)
I won't use Batteries because the licence is restrictive, so I can't copy the source, and I have no faith in its long-term viability as a stand-alone product: the only way I could use it is if it were incorporated directly into the standard distribution of Ocaml.
In the C++ world, I might just consider using Boost: although it is a third party library not part of the Standard, it has such heavy community support and it is actually excellently synchronised with the Standards development process. Ideas developed and tested in Boost become the kind of existing practice that can be standardised, and the standards process is open enough to allow community participation.
Ocaml has obtained the popularity it actually has because it is such a fine product, but that is not enough to make it a mainstream language. Java is crud; it was made popular with billions of dollars of marketing and library development, but in the end it had to be released to the community to survive at all.
If you want a language to use in real-time embedded systems, you need pointers and you can't afford a GC.
I think the main reason is that too few developers know OCaml.
And when talking to other developers (those who have heard something about Ocaml), I always get the impression that they think of OCaml as an "education-only" language... sad but true.
Well, maybe F# becomes popular.
It doesn't help that C->OCaml is a larger mental transition than C->Lisp. I've considered OCaml a couple of times, and always found that the cost/benefit just wasn't there for me, so I set it aside again. It wasn't the constructs that made it look hard; those actually looked really neat. It was trying to learn an entirely different meaning for '!'. Lisp at least looks so different that it's easy to avoid misinterpreting small pieces of it as C.
IMHO, a big problem of OCaml is not the language (which is great) but the people who develop it and, as a consequence, its license:
They use the Q Public License for the compiler! Yes, the license ex-Trolltech used for the Qt libraries! Forget about getting any contributions with such a license.
If you checked the Language Shootout (http://shootout.alioth.debian.org/) around 7-8 years ago, OCaml was just behind C and C++ in speed of execution. In the meantime, other languages (like Haskell) got a better compiler (due to a different community approach, I suppose), and now OCaml's speed of execution is not as great as it was in the past.
In short, I would not use OCaml, because I don't see it going anywhere better without some really good hackers creating an OCaml compiler with a REALLY open-source license and a community with REALLY open-source behaviour.
I like OCaml a lot... I've implemented a bunch of things with it: compilers, interpreters, a system for communicating with C...
When I learnt it, the main problem was that the error messages were not really clear. For instance, at the beginning I was never really sure when to put ';', and it was really hard to discover that the ';' was actually the thing that was misplaced.
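The confusion usually comes from `;` being a sequencing operator between two unit-typed expressions, not a statement terminator as in C. A small sketch (the `greet` function is hypothetical; a `Buffer` is used so the effect is checkable):

```ocaml
let buf = Buffer.create 16

let greet () =
  Buffer.add_string buf "hello";   (* `;` sequences the two unit expressions *)
  Buffer.add_char buf '\n'         (* no trailing `;` on the last one *)

let () =
  greet ();
  assert (Buffer.contents buf = "hello\n")
```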