Computer Science problems that are still problematic [closed]


Aside from those mentioned in Wikipedia (Unsolved problems in computer science), what other Computer Science problems have yet to be solved?

I thought about asking this question because other great minds out there might not be aware that such problems exist.

(Set to community wiki; one CS problem per post please)



I still haven't figured out where the Any Key is.

OK, in all seriousness (and in order to contribute something worthwhile): how about the problem of applying parallel computing to "serial" tasks? The theoretical limits of serial computation are being reached, whereas parallel computing has no such theoretical limit. Applying parallel computing to serial problems, however, is very difficult. For example, a serial problem might require a series of calculations, with each calculation in the series relying on the result of the previous one. How do you accomplish such a task in parallel?

This article illustrates things from a theoretical point of view and brings up the notion of speculative computing as a possible solution (with a neat point about human brains). This is a very new area, however, and solutions don't come easily.
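A classic illustration of turning a seemingly serial computation into a parallel one is the prefix sum (scan). The sketch below is written sequentially in Python, but it is structured as the O(log n) rounds of the Hillis–Steele scan: within each round every update is independent, so on a parallel machine each round could execute in a single step.

```python
def parallel_style_scan(xs):
    """Inclusive prefix sum, structured as log2(n) parallel rounds.

    A naive running sum is a chain of dependent additions; here each
    round's updates are independent of one another, which is what a
    parallel machine can exploit.
    """
    out = list(xs)
    step = 1
    while step < len(out):
        # Every element update in this round reads only values from the
        # previous round, so all of them could run simultaneously.
        out = [out[i] + (out[i - step] if i >= step else 0)
               for i in range(len(out))]
        step *= 2
    return out

print(parallel_style_scan([1, 2, 3, 4]))  # → [1, 3, 6, 10]
```

The function name and structure are just for illustration; the point is that a dependency chain of n additions collapses into about log2(n) rounds of independent work.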


Finding agreement on which programming language should be used to solve which problems.


Natural language processing that actually works. Can't believe this hasn't been mentioned yet.


The Open Problem Garden has a small list you might be interested in:


Graph isomorphism.

Basically, most naturally occurring problems are either easy (P) or probably hard (NP-complete). There were, if memory serves, two or three problems that fell in between. Primality was one, but that was recently proven to be in P. Graph isomorphism is the other.

Graph isomorphism is this question: given G1 and G2, is G2 simply G1, but "relabelled"? Equivalently, can we relabel G2 so that it is exactly the same as G1?

Wiki articles! A general overview, and the article concerning the issue of its complexity class here.

Edit: I really have to remember to type ALL the words.
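To make the definition concrete, here is a brute-force sketch in Python: it tries every relabelling (permutation) of the vertices, which takes n! time — and the open question is precisely whether anything dramatically better is possible in general. Graphs are assumed (for illustration) to be given as lists of adjacency sets over vertices 0..n-1.

```python
from itertools import permutations

def isomorphic(g1, g2):
    """Brute-force graph isomorphism check for small graphs.

    g1, g2: lists of adjacency sets, e.g. [{1}, {0, 2}, {1}] is the
    path 0-1-2. Tries every vertex relabelling of g1 and asks whether
    it produces exactly g2's edge set.
    """
    n = len(g1)
    if n != len(g2):
        return False
    edges1 = {(u, v) for u in range(n) for v in g1[u]}
    edges2 = {(u, v) for u in range(n) for v in g2[u]}
    for perm in permutations(range(n)):
        # Relabel every edge of g1 under this candidate bijection.
        if {(perm[u], perm[v]) for (u, v) in edges1} == edges2:
            return True
    return False
```

For example, the path 0-1-2 and the path 1-0-2 are isomorphic (relabel the middle vertex), while a path and a triangle are not.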


Dynamic optimality for splay trees.

Fix a set of queries into a binary search tree ("find node 6, find node 13, find node 42, ..."). Splay trees are statically optimal: if you create the best possible fixed binary search tree and run the queries against it, it will run no more than a constant factor faster than running the same queries against a splay tree.

This is somewhat comparing apples to oranges, since a splay tree is not a static tree. The open question is whether splay trees are dynamically optimal: is a splay tree within a constant factor of the best tree that is allowed to modify itself during the queries?
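For illustration, here is a minimal recursive splay sketch in Python (a simplified variant of the Sleator–Tarjan operation, not production code). Accessing a key rotates it to the root; this self-adjustment is what the dynamic-optimality conjecture is about.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(x):
    y = x.left
    x.left, y.right = y.right, x
    return y

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    return y

def splay(root, key):
    """Move `key` (or the last node on its search path) to the root."""
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:            # zig-zig: splay grandchild, rotate twice
            root.left.left = splay(root.left.left, key)
            root = rotate_right(root)
        elif key > root.left.key:          # zig-zag
            root.left.right = splay(root.left.right, key)
            if root.left.right:
                root.left = rotate_left(root.left)
        return root if root.left is None else rotate_right(root)
    else:
        if root.right is None:
            return root
        if key > root.right.key:           # zig-zig (mirror)
            root.right.right = splay(root.right.right, key)
            root = rotate_left(root)
        elif key < root.right.key:         # zig-zag (mirror)
            root.right.left = splay(root.right.left, key)
            if root.right.left:
                root.right = rotate_right(root.right)
        return root if root.right is None else rotate_left(root)
```

After `root = splay(root, k)`, the accessed key sits at the root, so a sequence of queries continually reshapes the tree — the conjecture says this reshaping is within a constant factor of any offline self-adjusting strategy.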


ORM is the Vietnam of Computer Science -Ted Neward

Which is to say, it isn't solved to many people's satisfaction.


Successfully beating strong human players at the game of Go. See the Wikipedia article about computers and Go.


Binding UI elements to a database.

There are many miserable attempts out there and, though I hate to say it, .NET is probably the best one today. Just think about it: after literally 30 years, it's still a pain to build a simple editor for a person with several addresses.


An interesting question here is the definition of "problem." A problem is simply room for improvement under given resources (and one not proven to be unsolvable), so by this definition we have plenty of problems in every field.

A problem may be, say, to improve a factorial-time solution to an exponential-time one (as long as it hasn't been proved that no such improvement exists).


How to improve the Netflix algorithm by the next 10% :) (congratulations to The Ensemble!)


Note that the Church-Turing thesis is not really a statement about mathematics. It is a statement about the physical world.

The closest you can come to a proof of it is something like "it is true under the standard model".

This isn't to say it couldn't be formalized to a larger extent, but the best you can hope for is to clarify specific assumptions about the physical world.


The traveling salesman problem, I believe, still remains unsolved, in the sense that no polynomial-time exact algorithm is known.
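To be precise, exact solutions exist — they just take exponential time. A brute-force sketch in Python that tries every tour (fine for a handful of cities, hopeless beyond a dozen):

```python
from itertools import permutations

def shortest_tour(dist):
    """Exact TSP by exhaustive search over all tours starting at city 0.

    dist: symmetric matrix of pairwise distances. Returns (length, tour).
    Runtime is (n-1)! — which is exactly why TSP is considered "unsolved"
    despite being trivially solvable in principle.
    """
    n = len(dist)
    best = None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if best is None or length < best[0]:
            best = (length, tour)
    return best
```

On the 3-city matrix `[[0, 1, 4], [1, 0, 2], [4, 2, 0]]`, every round trip has length 7, which the function finds immediately; the interesting question is whether anything fundamentally better than this kind of exponential search exists.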


Re-parsing the answers, I think the element combining at least two of the previous answers is the problem of breaking the barrier between syntax and semantics — which is, in fact, the problem virtually every programmer and computer scientist is working on. (Lately, "semantics" is increasingly appearing as the topic of whole CS areas.) Most of the fields and topics we have opened up start with the promise of breaking this barrier. So far, all of them have sooner or later been reduced from "creating intelligence" to "intelligent algorithms".

AI is probably the research area where this has been most prominent, but in the end, many other people have been dreaming of what is basically a "Do what I mean" button. (I could fit the evolutionary algorithms, neural networks, and lately the semantic web people in here.) The main hindrance is that everything a computer does is shifting bits.

I am probably spreading prejudice and foolery here, because for materialists this is not a fundamental problem: shifting bits is probably all we are doing in the human brain, too. It might simply be a problem of complexity.

Well... I am not willing to start that discussion here, and besides, syntax vs. semantics is quite a general topic. Spending too much time on it definitely keeps one from solving some of the more specific problems mentioned in other answers. Tackling those is much more effective, but it helps to keep in mind that there are very fundamental barriers here that we are not (yet) able to break through.


P =? NC would be my vote. Automatic many-core parallelization would be possible if P = NC, but it is believed that the two classes are different, and hence that there are P-complete problems that are inherently hard to parallelize. Understanding which problems fall into this class is increasingly important.
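The canonical P-complete problem is the Circuit Value Problem (CVP): given a Boolean circuit and its inputs, compute its output. It is trivial to evaluate sequentially, yet no general NC (polylog-depth) parallel algorithm for it is known. A small sketch, with a gate encoding chosen purely for illustration:

```python
def eval_circuit(gates):
    """Evaluate a Boolean circuit given in topological order.

    Each gate is a tuple referring to earlier gates by index:
      ('CONST', value), ('AND', i, j), ('OR', i, j), ('NOT', i).
    Sequential evaluation is linear time, but the chain of dependencies
    is what makes CVP hard to parallelize (it is P-complete).
    """
    vals = []
    for g in gates:
        if g[0] == 'CONST':
            vals.append(g[1])
        elif g[0] == 'AND':
            vals.append(vals[g[1]] and vals[g[2]])
        elif g[0] == 'OR':
            vals.append(vals[g[1]] or vals[g[2]])
        else:  # 'NOT'
            vals.append(not vals[g[1]])
    return vals[-1]
```

A fast parallel evaluator for arbitrary circuits of this kind would imply P = NC, which is why the question matters for automatic parallelization.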
