Ubuntu: Why are the operating system and kernel treated separately in Linux? [closed]


When it comes to any Linux operating system, including Ubuntu, people tend to distinguish between the terms kernel and operating system. The same distinction applies to Windows and the OS X family, but why is it so widespread in the Linux community? Is there a way to update the OS kernel without updating the OS itself, or vice versa? If so, how can that be useful?


The whole GNU/Linux system is built using a modular approach. You can usually upgrade (or, more generally, replace) a single module without touching the others. The module in question can be a bootloader, a kernel, a shell, a command, a desktop environment, a GUI application, whatever…

Of course, this only holds as long as dependencies are managed correctly. In Ubuntu and the distributions around it, APT resolves dependencies automatically.

You can install another kernel version using the command:

sudo apt install linux-image-<version>  

As long as APT allows it, you should be able to reboot into the selected kernel version, be it generic, lowlatency, etc. Or you can build a kernel yourself, e.g. a real-time (PREEMPT_RT) kernel, and use it with your current system.
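Before switching, a couple of harmless read-only commands (assuming a Debian/Ubuntu-style system) show what you are currently running and which kernel images are already installed:

```shell
# Show the kernel version currently running
uname -r

# List installed kernel image packages (Debian/Ubuntu only);
# each of these is selectable from the GRUB boot menu.
command -v dpkg >/dev/null && dpkg --list 'linux-image-*' | awk '/^ii/ {print $2}' || true
```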


As you know, the kernel is an important part of the OS. In GNU/Linux distributions you can easily update the kernel without touching other parts of the OS; however, you are then simply updating one part of your OS.

An operating system is made of two parts: kernel space and user space.

So yes, you can update your kernel space without touching your user space, as long as the new version is compatible with your current user space.

And as for updating user-space tools: that's another yes.

When you run:

sudo apt-get upgrade  

if an update is available for the kernel, you will get:

The following packages have been kept back:
  linux-generic linux-headers-generic linux-image-generic

so you are only updating your user space. And when you run something like

sudo apt-get dist-upgrade  

you are updating everything including the kernel.
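If you want to see which of the two commands would touch the kernel before committing to anything, APT's simulation mode is safe to try: `-s` prints the planned actions without changing the system (and works as a regular user):

```shell
# Simulate both upgrade flavours; nothing is actually installed.
# "Inst"/"Conf" lines and the summary show what would happen.
apt-get -s upgrade | tail -n 4
apt-get -s dist-upgrade | tail -n 4
```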

To upgrade only your kernel to a newer version, use something like:

$ apt-cache search "linux-image-[0-9]+.*-generic" | awk '{print $1}' | head -4
linux-image-4.4.0-21-generic
linux-image-4.10.0-14-generic
linux-image-4.10.0-19-generic
linux-image-4.10.0-20-generic

to find a list of newer kernels, then install one as a new package, for example:

sudo apt install linux-image-4.10.0-14-generic  


First, a few clarifications, because I sense you don't understand how GNU/Linux systems came into existence. Bear with me if this is nothing new to you:

The "kernel" is not just another program that runs; it is the part of the OS providing the base functions. If you want to start a program (say, you type "ls" at the command line), the binary has to be loaded from disk (that includes some filesystem operations to locate it and some file handling to read it), and then a "process environment" is created: memory gets assigned, a process number is issued, and so on. The former activities (filesystem lookups, reading from the file, ...) go through system libraries, but the latter ones are kernel functions. In some sense the kernel "is the OS" and everything else is just decoration around it.
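One cheap way to peek at this split on a running system (assuming Linux with the usual /proc pseudo-filesystem mounted): the "files" below are not stored on disk at all; the kernel generates their contents on every read:

```shell
# The kernel's own identification string
cat /proc/version

# The kernel's bookkeeping for the very process reading it
head -n 3 /proc/self/status
```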

"Linux" is in fact (only!) a kernel, with no other parts of an OS around it. Linus Torvalds started writing it in 1991, inspired by Andrew Tanenbaum's MINIX teaching OS, and developed it into a full-blown, workable kernel. To this day Linus (and the many others who contribute or have contributed) develops this kernel. It is still very similar to UNIX, but it is NOT a UNIX kernel.

"GNU" started as an initiative to "make better" many common UNIX commands. I won't discuss whether they succeeded, but they definitely wrote a lot of software and at some point had a whole collection of utility programs. They even started to develop an OS kernel of their own (the HURD), which was largely modelled on UNIX but definitely different. To this day, however, the HURD is in early development and hardly a workable solution. "GNU", by the way, is short for "GNU's Not UNIX": they tried to overcome some (perceived or real) limitations of UNIX with the intention of creating a successor to it (again, I don't want to enter the discussion of whether they succeeded; I don't care if it is "better" or "worse", but it is definitely different!).

So, with a toolset lacking a kernel on one side and a kernel lacking a toolset on the other, it was a natural development to put the two together: GNU/Linux was created.

Still, to have a working (and workable) OS you need more than just a kernel and a toolset: you need a package-management system, you need installation procedures, you need template configurations, you need ....

Several different people (or groups of people) came to this conclusion and used the GNU/Linux combination to create a GNU/Linux system of their liking, by adding exactly the things I mentioned above: a package manager, a packaging system, installation procedures, and more. These different groups (or rather the results of their efforts) are what the different distributions are. Today three major package managers are in use (apt for Debian and derived systems like *buntu, rpm for Red Hat and derived systems like Fedora, CentOS and more, pacman for Arch Linux), but all of them just manage packages of software that is essentially the same: what is called when you issue "ls" or "df", etc., on a Debian system or a RHEL system comes from different packages, but essentially it is the GNU version of the ls (or df) program, just differently packaged.

So "in principle" you can update the kernel alone, just like the people who created a distribution from various versions of all the software mentioned above did.

But, and this is a really big BUT: there is not only the kernel plus some additional software; there are a lot of other things to keep in mind, like system management tools (systemd, which some distributions use and some do not) and network management tools like NetworkManager (which in turn depends on certain versions of the GNOME libraries), and so on. A "distribution" is a rather complex thing, and chances are that if you try to update the kernel you will end up updating a lot of other things because of the many interdependencies.

Still, and also "in principle", as above: you can create your own distribution by downloading all the sources, compiling them, finding a working set of version combinations, putting some packaging system into place (or using one of the existing ones), and so on, until you have a distributable, installable and configurable system. This is what the creators of distributions like Ubuntu do, and it's not a miracle, just a lot of complex work; so in reality most users shy away from that and use something they can get ready-made.

I hope this answers your question.


The simplest answer has nothing to do with Ubuntu; it is related to the way GNU/Linux is built. If you look at it as a system developer, you'll see two worlds, separated by a sharp border (the ABI).

The kernel world, where low-level developers work, is a system of its own. It has everything you'd normally find in a regular application. The only difference is that its user is not the actual person using the machine, but the user-space world. The kernel "application" is the middle-man, the server that is actually using the machine - the ghost in the shell.

Now, user space is the normal world that everyday users and developers play in. It has rigid APIs, rules, files, and, most importantly, an abstract, childish image of the machine it is running on. Since the user only sees this portion, and it accounts for 99% of the distribution's size, it is easy to misname it the operating system. The right nomenclature is to call it a software distribution, created by some entity (Canonical, Fedora, etc.), using a kernel (Linux, HURD, BSD, etc.), and built using a set of tools (usually provided by GNU).

To answer your question: in GNU/Linux (just like in Windows and OS X, trust me) you can change the kernel, not just the version but the entire architecture (the Linux kernel vs. the HURD kernel), and, as long as the ABI is not touched, never make a single change in the user world... Back in the day, when you had to build the kernel from sources, you could go through several changes like these just to get a crappy USB webcam to work... Now, with the modular kernel, you just have to install a module, and you get a brand-new kernel world, with the ABI (sometimes) extended with new features...

Again, the same goes for user space. When you install a new application from, let's say, an Ubuntu repository, 99% of the time your biggest concern is compatibility with the other user-space components, not the actual kernel. There are cases where the kernel version dictates (through the ABI) the range of stuff that can be installed in user space, but the goal (for the developers, at least) is to make this go away...

Another thing to ponder: you can (and it is pretty easy to) build your own, special, one-of-a-kind GNU/Linux distribution. Get a kernel, some simple scripts, several apps, and you're set. It is just that easy (take a look at the OpenWRT GNU/Linux distribution for network gear; the entire distribution fits in something like 16 MB).


I guess they are kept separate because the kernel is a critical part. A kernel with a regression, or just a failed update, might do quite a lot of damage. You might want to update it less frequently, or only after waiting some time to make sure no one reports worrying bugs.

Also, some advanced or professional users recompile the kernel to modify its behavior to better suit their needs. In such a case, you obviously would not want it automatically replaced with the stock version every time you upgrade.
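On Debian/Ubuntu systems, one common way to achieve exactly that is to put the kernel meta-packages on hold. A sketch (the package names are the usual Ubuntu ones, and the hold/unhold commands need root):

```shell
# Stop apt from upgrading the kernel meta-packages
sudo apt-mark hold linux-image-generic linux-headers-generic

# Inspect which packages are currently held
apt-mark showhold

# Resume normal kernel updates later
sudo apt-mark unhold linux-image-generic linux-headers-generic
```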
