Linux

Each program or library (each package) should have its own directory.


But they do have their own directories.

Mixing everything together is nonsensical and makes it hard to install multiple versions side by side without putting version numbers in filenames (which is also a pain).


No need for any of that either - version numbers are in the directory name.

Installation does everything properly, that's why we have package managers. There are GUI package managers as well, so that is even easier - not that doing it from the shell was hard anyway.
@TheIdeasMan: you must be thinking of how Windows does it properly. I'm talking about how *nix mixes everything all together. Otherwise there wouldn't be a need for GoboLinux.
I'm half tempted to write an article on how Linux is not always the most convenient of development platforms...

When you start application development, the most obvious convenience that Linux provides is standardized paths that compilers use without the need to list those paths explicitly. For instance, the compiler knows to search /usr/include for header files and /usr/lib for libraries.

There are problems with this, especially now. Let's take a hypothetical library as an example... say there's a library file called "libbob.so". This library is used by various programs and is meant to be installed as a shared library within /usr/lib. This way, when compiling a program, you only need "gcc -lbob main.c". Easy, right?
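To make that concrete, roughly (libbob and bob.h are hypothetical, and the exact paths vary by distro):

    # /usr/include is a default header search path, so this just works:
    #   #include <bob.h>
    # /usr/lib is a default linker search path, so -lbob finds /usr/lib/libbob.so:
    gcc main.c -lbob -o main

    # what the dynamic linker knows about:
    ldconfig -p | grep libbob
    # what the built binary will actually load at run time:
    ldd ./main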

What happens when bob needs an update? Well, this is where shared libraries shine, right? You can just replace the shared library without the need to re-compile the application, as long as the library didn't change its ABI. But... what if the library did change its ABI? libtool (google it) has ways of versioning... but libbob.so can only point to one file. Not to mention that distributions nowadays almost laughably refuse to allow multiple packages containing different versions of the same software. This in particular is a big "fuck you" to proprietary software where the binaries must be re-compiled from within the closed-source environment, something the packagers cannot do themselves. This also doesn't play nice with multiple distributions... which is why a lot of people become frustrated when trying to find the Goldilocks zone of library versions that all distributions will provide...
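For reference, the usual soname scheme looks roughly like this (a sketch, still with the hypothetical libbob and made-up version numbers):

    /usr/lib/libbob.so.1.3.4                     # the actual old library file
    /usr/lib/libbob.so.1   -> libbob.so.1.3.4    # soname link; binaries built against ABI 1 load this
    /usr/lib/libbob.so.2.0.0                     # new release with a changed ABI
    /usr/lib/libbob.so.2   -> libbob.so.2.0.0    # new soname link
    /usr/lib/libbob.so     -> libbob.so.2        # the single dev link that "gcc -lbob" uses at link time

Binaries linked against libbob.so.1 keep working only while the distro keeps both runtime versions installed - which is exactly the part many distros refuse to package twice.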

The answer for proprietary software is: you don't support any versioned shared library. If you need a specific version of a library, you must statically link it or distributions will fuck you.

This is even more so for C and C++ libraries. As a matter of fact, Steam is having this issue right now, and it affects anyone using open-source drivers on Linux: https://github.com/ValveSoftware/steam-runtime/issues/13

There are other issues... but hopefully most of this will end with the various upcoming solutions built on Linux containers, such as Docker or LXC. It's hard to guess...

In conclusion though, Linux is easier to *start* development on. It's deployment that becomes a severe pain in the ass that makes you want to punch kittens. If you're deploying via source, it's not so bad. People can freely package your source for you and if they fuck up deployment, that's on them. If you're deploying via binaries... prepare for a bit of hell.
you must be thinking of how Windows does it properly. I'm talking about how *nix mixes everything all together.


Absolutely not. My g++ include files reside in /usr/include/c++/4.7.2, which was the default location. I have Qt in /opt/Qt5.1.1 and QtCreator in /opt/qtcreator-3.3.1; for those I specified /opt, but the rest of the path came from the package. I need to update though, these are a bit old - I don't have C++14 with g++ 4.7.

I have been in the habit of installing software in /opt so it can stay there when I upgrade to a new version of the distro, and use it from there if I test a different distro.

Otherwise there wouldn't be a need for GoboLinux.


I don't see the need for GoboLinux in that respect. It has never been a problem for me, and I have been using various Linux distros and proprietary UNIX systems since 1989.

As I said, the package manager works really well and doesn't cause any problems at all.

For example, if I install a new distro from a DVD, there will probably be about 800 packages that need updating, because the DVD is a little out of date from when it was attached to a magazine in the UK and travelled all the way to Australia. Anyway, I can get the system to update everything; it finds all the packages and their dependencies and installs everything. I set it going, and in the morning it's all done.

Just wondering, what do you mean by mixing everything together? What system do you have?

I guess I have been a fan of UNIX since 1989, when at work we had high-end Sun SPARCstations for doing CAD work. When I say "high end" I mean they were 32-bit with 8MB of RAM and a 300MB HD, all up about $100K worth for 10 workstations. In contrast, PCs were 8086-based XTs or ATs, or even the fancy 286, with 1MB of RAM and about a 40MB HD.

Now the big thing for me was the sheer amount of facilities available on UNIX - there were hundreds of different commands available in the shell, whereas MS-DOS 3.0 had about 20 simplistic commands. It may seem an unfair comparison, like one's average second-hand car compared to a $500K sports car, but that disparity evaporated completely with the advent of Linux in the early 1990s, especially since Linux offered all that for free.

But there are disadvantages: for example, I was recently using Autodesk Civil3D (a really good road and rail design package) at work - it is worth about $12,000. I doubt whether I could find anything like that for Linux; if there was, it would be a lot more money, and possibly only available under proprietary UNIX, which is itself rather expensive. What I have found is BricsCad (a normal CAD, as opposed to a civil design one), which costs $1,000 for the full version and is very similar to, but has better features than, the full version of ordinary AutoCAD, which is about $4,000. The Light version of AutoCAD costs about $700 to $1,000 IIRC and it is fairly simplistic compared to the others.
I'm talking about running programs from the shell. How do you specify which version of a program you want without a full path or a version number in the filename? How do you deal with the PATH nightmare?
TheIdeasMan wrote:
My g++ include files reside in /usr/include/c++/4.7.2
Let's say you download a library and build it from source, then install it. Where do you expect the headers to be installed to? What about the libraries? The executables? By default, they all get shoved in folders with other software that is completely unrelated.
How do you specify which version of a program you want without a full path or a version number in the filename?


Binary executables that normal users can run are in /usr/bin. There is a $PATH environment variable that is set for each user, so that these executables can be run from anywhere - so no full path required. It is not recommended to add lots of paths to the $PATH environment variable. The executables themselves have the version number in their name, it comes from the package - so no need to do anything manually with that, and that is a sensible thing. Shortcuts on the desktop refer to the correct file.
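For example (the output is illustrative only - the exact names depend on the distro and what's installed):

    echo $PATH
    # /usr/local/bin:/usr/bin:/bin:...

    ls /usr/bin/python*
    # /usr/bin/python  /usr/bin/python2.7  /usr/bin/python3.3

    which python          # shows which one an unqualified "python" resolves to
    python2.7 --version   # or run a specific version explicitly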

With programs that I build personally - say to help someone on the forum, or my own projects in the dev stage - I have to be in the directory that has the executable in it, because the current directory is not on $PATH. This is a security feature.

How do you deal with the PATH nightmare?


No path nightmares.

Where do you expect the headers to be installed to? What about the libraries? The executables? By default, they all get shoved in folders with other software that is completely unrelated.


Although these 3 things will be in 3 different locations, it's organised. Executables are all in one place. Note that executable means the one and only file that launches the entire application. Libraries are put in one place (typically /usr/lib64), but each application has its own directory with a version number in the name. Headers are in one place as well, and have their own directory, e.g. boost is in /usr/include/boost/

Things don't have to go in the default locations; as I said earlier, I install new software to /opt.

In Windows, where do all the shared DLLs go? And shared applications like the MS Office apps? If one has a separate folder for everything related to one application, then in order to share there would have to be lots of paths in the PATH environment variable, or a lot of info in the registry.

I guess it's just that there are different paradigms - like C++ versus Haskell, or even C versus C++ - and one has to get used to using them. Both UNIX and Windows have their way of doing things, and it is probably a very long debate to prove whether one is better than the other. I happen to have liked the UNIX paradigm better, for the last 30 years or so.

Another thing for you to consider is your first real job : would you turn down a really good job just because it involved working with UNIX / Linux, or would you hold out for a possibly shittier job but on Windows, or hold out for an indeterminate amount of time (years maybe) for your version of a perfect job?

Also, not everything is cutting edge in IT; there are lots of people who deal with legacy things - they limp along continually patching up old nightmare applications. Other people earn really good money by maintaining applications written in COBOL and particular versions of assembly.




TheIdeasMan wrote:
The executables themselves have the version number in their name, it comes from the package - so no need to do anything manually with that, and that is a sensible thing.
I disagree. The problem is that so much software looks for the programs without the version numbers and doesn't know what to do when you have version numbers in the filename (which is an ugly hack in and of itself). Then you have to do even uglier hacks like symlinking the program name without the version to the version you want to use.
TheIdeasMan wrote:
With programs that I build personally - say to help someone on the forum, or my own projects in the dev stage - I have to be in the directory that has the executable in it, because the current directory is not on $PATH. This is a security feature.
You misunderstand, I am talking about libraries like SDL, SFML, FreeType, etc. - where do you install those to? What do you pass to -DCMAKE_INSTALL_PREFIX? Then what about when you build other things that depend on them? By default they assume you installed it globally mixed in with everything else.
TheIdeasMan wrote:
Although these 3 things will be in 3 different locations, it's organised.
That's not what I called organized. Why should someone else's idea of 'organized' be forced upon me? Why did the GoboLinux people have to completely modify the OS just to avoid that default?
TheIdeasMan wrote:
Executables are all in one place. Note that executable means the one and only file that launches the entire application.
You assume that a single application is made up of only one executable.
TheIdeasMan wrote:
Libraries are put in one place (typically /usr/lib64), but each application has its own directory with a version number in the name. Headers are in one place as well, and have their own directory, e.g. boost is in /usr/include/boost/
Why not group the executables, libraries, and headers into the same place? Yes, that's not the way it currently is, but can you explain why the way it is now is better than any alternatives?
TheIdeasMan wrote:
In Windows, where do all the shared DLLs go?
DLL sharing isn't really recommended these days - that's an old holdover from when hard-drive space and memory were limited. Because of version mismatches it's a nightmare to try and share DLLs - most applications just use the ones in their application directory because it is safer and more reliable.
TheIdeasMan wrote:
And shared applications like the MS Office apps? If one has a separate folder for everything related to one application, then in order to share there would have to be lots of paths in the PATH environment variable, or a lot of info in the registry.
Nope, none of it needs to be in PATH or registry. Each application uses its own stuff which is guaranteed to work.
TheIdeasMan wrote:
Both UNIX and Windows have their way of doing things, and it is probably a very long debate to prove whether one is better than the other. I happen to have liked the UNIX paradigm better, for the last 30 years or so.
I'm not trying to convince you otherwise, I'm just saying that you shouldn't take for granted that what you are used to is the best possible way. Some people actually go out and defend the registry or the *nix directory layout, rather than accepting that it's just a historical decision we've never changed.
TheIdeasMan wrote:
Another thing for you to consider is your first real job : would you turn down a really good job just because it involved working with UNIX / Linux, or would you hold out for a possibly shittier job but on Windows, or hold out for an indeterminate amount of time (years maybe) for your version of a perfect job?
I would of course accept any job where I have the necessary experience. I don't have a very strong aversion to *nix, it's just that there is very little reason for me to pick between *nix and Windows and the directory layout ended up being the deciding factor.
TheIdeasMan wrote:
Also, not everything is cutting edge in IT; there are lots of people who deal with legacy things - they limp along continually patching up old nightmare applications. Other people earn really good money by maintaining applications written in COBOL and particular versions of assembly.
Yeah, not sure what that has to do with the discussion though? On that note though, since it's pretty clear that people don't update systems I really don't understand the fear of breaking reverse compatibility. It seems like the only way we fix the mistakes of the past is by creating entirely new systems with mistakes of their own instead of just fixing what we already have.

EDIT: See also:
http://tiamat.tsotech.com/directory-layouts-suck
The problem is that so much software looks for the programs without the version numbers and doesn't know what to do when you have version numbers in the filename (which is an ugly hack in and of itself). Then you have to do even uglier hacks like symlinking the program name without the version to the version you want to use.


A few things here: the beauty of the package manager is that it knows what an application's dependencies are, and will find and install whatever is needed to make it work. This usually means libraries. It is especially aware of versions in this process.

For actual executables, the application installer has a shell script which determines whether a particular executable or subsystem (a compiler, say) exists. If it doesn't, the installer can install that as well.

Some things aren't named with a version number. For example, boost is in /usr/include/boost and that is the current version. It would be massive duplication to keep a previous version of boost - the diff is probably fairly minimal compared to the size of the whole thing. One can always have a backup of it, though.

You misunderstand, I am talking about libraries like SDL, SFML, FreeType


I made that point to contrast what happens with personal programs as opposed to actual software like the ones you mentioned.

where do you install those to?


I personally put them in /opt. Those directories have the files the app needs; it finds the libraries in the standard place, as mentioned earlier.

What do you pass to -DCMAKE_INSTALL_PREFIX?


Well, those options exist to allow customisation of where things are put, but often that is unnecessary, and often make and make install are done in the directory where the software's source is.

Then what about when you build other things that depend on them? By default they assume you installed it globally mixed in with everything else.


The install script will go looking for what it needs; otherwise there are switches to specify where things are. To avoid that, it's easier to go with the default locations - I wouldn't put library files in some other non-standard place unless I specifically wanted to keep them separate.
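With CMake that looks something like this (a rough sketch - the project and prefix names are made up; CMAKE_INSTALL_PREFIX and CMAKE_PREFIX_PATH are the standard variables for it):

    # install the library into its own prefix instead of the default /usr/local:
    cmake -DCMAKE_INSTALL_PREFIX=/opt/sfml-2.3 ..
    make && make install

    # later, when building something that depends on it, tell CMake where to look:
    cmake -DCMAKE_PREFIX_PATH=/opt/sfml-2.3 ..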

That's not what I called organized.


Well, you perhaps need to look at it differently. You are kind of looking at it from an "all the stuff I ever need is in my room" point of view, whereas in a shared house the kitchen, bathroom (not always), TV room, office etc. are shared spaces. Your view sounds like each of the 4 household members should have their own self-contained one-room apartment in the house.

Why should someone else's idea of 'organized' be forced upon me? Why did the GoboLinux people have to completely modify the OS just to avoid that default?


Each to their own, I guess. Remember, UNIX has been used by large organisations, and by many individuals through Linux, for 40-odd years, so all these people have figured out that it works. If some group wants to organise it their own way, that's up to them.

You assume that a single application is made up of only one executable.


In /usr/bin there is exactly 1 executable for each software package, and they are all quite small - about 500-800 KB. That executable knows where to find other files in the directory where the software is installed, and library files in their standard location. Often they are links to some other executable in the installed directory. When or if I need to make a shortcut on the desktop, I go looking in /usr/bin, not in multiple places. System files are organised like that too, so they can be found by scripts.

Why not group the executables, libraries, and headers into the same place? Yes, that's not the way it currently is, but can you explain why the way it is now is better than any alternatives?


Because they are all optionally shared. Linux does not have the fiasco of lacking support for version dependencies, because it has package managers that know all about versions and can find dependencies.

DLL sharing isn't really recommended these days


See my last point.

I'm not trying to convince you otherwise, I'm just saying that you shouldn't take for granted that what you are used to is the best possible way. Some people actually go out and defend the registry or the *nix directory layout, rather than accepting that it's just a historical decision we've never changed.


Well that goes both ways, doesn't it? You seem frustrated here about the UNIX system, but elsewhere complain about the difficulty of linking custom builds on Windows. So I am not seeing much acceptance / agnosticism on your part.

On my side (I am probably just as un-agnostic as you seem to be), when I use Windows as an ordinary user, I use the software and don't worry about anything. I did try a bit of developing small things on Windows, but I found the free versions of VS just annoying. Maybe it was the lack of standards compliance, and the sense of being corralled towards .NET or C#. I was trying to write code for AutoCAD, which seemed rather awkward to do with C++.

Another way to look at things: there are auto transmissions and manual transmissions; if we metaphorically assign those to Windows and UNIX, then realise that there are also adaptive auto transmissions, which allow one to select a gear but otherwise do things like change down gears automatically as the car slows, rather than staying in a high gear like an old auto would. Then we could ask: how close is each OS to the adaptive auto?

I don't have a very strong aversion to *nix, it's just that there is very little reason for me to pick between *nix and Windows and the directory layout ended up being the deciding factor.


I think there is a strong reason for you to try Linux - the relative ease of building, linking and installing, which you seem unhappy with at the moment. The balancing factor might be education - there is a steep learning curve for Linux, it is a different paradigm. I am sure a smart young man like yourself could take to that like a duck to water - given your already substantial experience.

So why not take the plunge - build something on Linux. See how you get on.

Also, try building something with Qt, then use that same code base to target another OS.

Yeah, not sure what that has to do with the discussion though? On that note though, since it's pretty clear that people don't update systems I really don't understand the fear of breaking reverse compatibility. It seems like the only way we fix the mistakes of the past is by creating entirely new systems with mistakes of their own instead of just fixing what we already have.


I mentioned that because you might be faced with applying for a job that does involve continually patching up a nightmare app. And despite having a degree, experience is gained on the job.

Sometimes consultancies prefer to have 10 people with essentially lifetime jobs on an almost broken system, rather than propose to spend 12 months building a brand new system. Fixing the old system to make it equivalent to a new one is impossible, because of its poor design. Management also seem reluctant to start afresh sometimes. When they do agree to start again, the new system has problems of its own, as you said.

I read the link you posted; it seemed reasonable enough. I noted that the replies all seemed to bag the Windows folder names, but that is probably related to the vast majority of users being Windows users.

Anyway, a good discussion IMO :+)

Regards
TheIdeasMan wrote:
Your view sounds like each of the 4 household members should have their own self-contained one-room apartment in the house.
Yes, that's exactly what I want. Perhaps it is related to me being somewhat antisocial? :)

TheIdeasMan wrote:
In /usr/bin there is exactly 1 executable for each software package
I guess the compiler and the linker are two entirely separate software packages? Either way, what you are saying has not been my experience at all.

TheIdeasMan wrote:
Well that goes both ways, doesn't it? You seem frustrated here about the UNIX system, but elsewhere complain about the difficulty of linking custom builds on Windows. So I am not seeing much acceptance / agnosticism on your part.
The difficulties on Windows are caused by build scripts that don't even know what Windows is ;p When the build scripts are platform-agnostic, like with CMake, it's just as easy to build on Windows as on *nix. The entire point of that thread was to point out how developers who primarily use *nix can't even be bothered to make their programs platform-agnostic.

TheIdeasMan wrote:
I think there is a strong reason for you to try Linux - the relative ease of building, linking and installing, which you seem unhappy with at the moment. The balancing factor might be education - there is a steep learning curve for Linux, it is a different paradigm. I am sure a smart young man like yourself could take to that like a duck to water - given your already substantial experience.
I already use *nix in VirtualBox. It's not difficult at all; I just don't like the way things are handled, as per this discussion. There are other things I don't like about Linux that I have not mentioned, but they balance out with things that I don't like about Windows.


About package managers: I'm not much of a fan of them. I have this problem with Windows too - the fact that uninstaller programs have to exist at all is a pretty big issue IMO. Windows and *nix solve it in different ways: Windows has installers and uninstallers, whereas *nix has package managers. Either way, you end up unable to simply remove a program's directory and call it uninstalled - Windows has registry stuff and *nix has package manager metadata. Both of those send chills down my spine.

It is my belief that a better standardized directory layout would alleviate the need for installers and package managers to store metadata. But that's just a belief.
In /usr/bin there is exactly 1 executable for each software package
This is absolutely, completely false. There is in fact no relation between the number of packages installed and the number of executables in /usr/bin.
* What about metapackages (packages that simply install a set of packages)?
* What about packages that install several executables?
* What about packages that install no executables (libraries, assets, documentation)?
* What about executables that were installed into /usr/bin with make install?
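This is easy to check by asking the package manager itself (Debian-style commands as an example; rpm has equivalents):

    dpkg -L coreutils | grep '/bin/'    # one package, dozens of executables
    dpkg -L zlib1g | grep '/bin/'       # a library package: no executables at all
    dpkg -S /usr/bin/ld                 # the reverse: which package owns a given file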

My experience deploying on Linux has been similar to NoXzema's. Linux is okay to develop on, as long as your code will only ever run on your own system, or on systems that are exact clones of your own. As soon as you want to add a little bit more support, things get very complicated very quickly.
helios wrote:
My experience deploying on Linux has been similar to NoXzema's. Linux is okay to develop on, as long as your code will only ever run on your own system, or on systems that are exact clones of your own. As soon as you want to add a little bit more support, things get very complicated very quickly.
This is what I have heard too:
http://eev.ee/blog/2015/09/17/the-sad-state-of-web-app-deployment/
I heard someone else refer to Docker as "here's a VM image of my dev workstation" X)
I haven't done much developing on Linux (if any at all) but so far, the only real benefit for me doing so has been that libraries actually work. I say work as in actually link properly. Sadly, I'm still trying to compile things that run on Windows there, so all of this may be moot unless I can get past the complexities of cross-compiling.

Stupid nonfunctional libraries, making my life hell.
@helios

OK, you have got me there - it's not a one-to-one relationship, but that doesn't take away from being able to determine whether something is installed or not, either by script, package manager or eyeball.

The original assertion by LB was that the directory layout in *nix makes things harder

@All
With the changing-ABI library thing that NoXzema mentioned: I don't have any experience with that, but it just sounds like changing the ABI is a major fork in the road. Maybe this next assertion is naive, but would it not be better to re-release the software as a new version with new libraries? I mean, one has changed horses mid-stream - it's a bit much to expect the carriage is always going to make it to the other side :+)

@LB

I read that blog. It appears the author made some grave mistakes, as pointed out by Aigars Mahinovs near the end. The mistakes were having a 64-bit system with a 32-bit user space, and then, when it didn't work, trying to guess how to install it. The nightmare continued from there.

It seems that package managers work. For example, if the new software requires PostgreSQL 9.5, but one has v9.3, then the package manager will find it, and upgrade it.

If, however, one is building from source, then the documentation should say what is required in order to build, and it is incumbent on one to get those prerequisites.

For example, I built llvm on my system, there were some things I didn't have, so I installed them and everything went fine. I also built gcc in similar fashion.

@Ipsil

Have you tried Qt? It is supposed to use one codebase for different targets; I don't actually know how tricky it is, just thought I would mention it.


Well, Qt did somehow wind up on my Arch install (gotta love dependencies), so I'll try looking into that. Considering that my computer is haunted (I'm getting crash-inducing corruption in game saves that can't be reproduced, yet the saves crash on others' computers too, and horrible graphical glitches in another game that are only solved by turning on vsync), it's probably related.
@Ipsil

I don't know what Qt actually does: presumably, if targeting Windows, it makes some EXEs and DLLs, but whether it goes as far as making an actual installer, I don't know.

The original assertion by LB was that the directory layout in *nix makes things harder
It does make things harder, because there isn't a single subtree in the hierarchy that contains all the files for a specific package. Unless you already know which directories the package installs into, you can't know where all its files are. And let's not go into how each distro can lay out its hierarchy in novel and interesting ways, to the delight of developers everywhere!
Granted, Windows programs that install strictly into a single subtree and nowhere else are not all that common, but the pattern is more natural on Windows than on Linux.
Besides that, having a single, unique location for all libraries to go to works fine for a single-purpose system like a server, which will run for prolonged periods of time without changes to its software configuration, but is terrible for a desktop machine which may constantly change the software it uses.
Unless you already know which directories the package installs into, you can't know where all its files are.


I think it is just a matter of education; it's not hard, and lacking a small bit of education is not a reason to abandon something. Another analogy: it's slightly harder to drive a manual car, rather than an auto one, but that's no reason to refuse to learn to drive a manual car. Hell, maybe this sums up the whole debate :+D

And it is neither impossible nor difficult to find out where an application's files are.

From an ordinary user's perspective:
One can query the package manager - the rpm files themselves have all the necessary files inside them, like a zip file;
Use the ldd command to find libraries;
Or simply look in a small number of common and standard places;
Or use the find command as a last resort;
Or look at the properties of the shortcut on the desktop.
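On an rpm-based system, for instance, those commands look something like this (illustrative - "someapp" is just a stand-in name):

    rpm -ql someapp                  # list every file the package installed
    rpm -qf /usr/bin/someapp         # the reverse: which package owns this file
    ldd /usr/bin/someapp             # the shared libraries the executable will load
    find /opt /usr -name 'someapp*'  # last resort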

But if we are to talk installation:

For example, say I want to install your software on my system, and it needs llvm. First up, your documentation specifies that llvm is needed, and mentions an optional argument which specifies where its files are. If you had packaged your software it would be really easy, because the package knows what its dependencies are; so, to make it a harder example, say you didn't package it. Your installation script then needs to check whether I have llvm somewhere. Your script can use the query commands of the package manager. If that turns up nothing (say I didn't build llvm with rpm like I probably could have, I built it from source), it's not hard to make up a shortlist of directories where things might normally be installed. As I understand it, most distros have a set of directories which are always there - /bin, /usr/bin, /opt etc. Once you have an executable, you can use the ldd command to list its library dependencies.
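A rough sketch of what such a check could look like (just an illustration, not from any real installer; the --with-llvm flag is made up):

    #!/bin/sh
    # try the package manager first, then PATH, then a couple of common prefixes
    if rpm -q llvm >/dev/null 2>&1; then
        echo "llvm is installed as a package"
    fi

    for c in "$(command -v llvm-config)" /usr/bin/llvm-config /opt/llvm/bin/llvm-config; do
        [ -n "$c" ] && [ -x "$c" ] && { LLVM_CONFIG=$c; break; }
    done

    if [ -z "$LLVM_CONFIG" ]; then
        echo "llvm not found - install it, or pass --with-llvm=<path>" >&2
        exit 1
    fi
    echo "using llvm $("$LLVM_CONFIG" --version)"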

When I built llvm and gcc, the install scripts went to quite some lengths to see that various commands and indeed options to those commands were available.

Besides that, having a single, unique location for all libraries to go to works fine for a single-purpose system like a server, which will run for prolonged periods of time without changes to its software configuration, but is terrible for a desktop machine which may constantly change the software it uses.


Why is it so terrible? If software is installed and uninstalled with a package manager, as it should be, then it is just like (possibly better than) Windows with its install and uninstall programs, and I don't see why there would be a problem. Note that a package manager uninstall also checks whether other applications depend on its library files. And it doesn't leave orphaned library files, unlike Windows apparently does - at least that was the case at one stage, but I don't keep up with such things these days.

It's possible to update one's Linux system every day - security updates, bug fixes etc. It doesn't cause problems.
Another analogy: it's slightly harder to drive a manual car, rather than an auto one, but that's no reason to refuse to learn to drive a manual car.
It's more like a car with the steering wheel on the dashboard and the pedals behind the back seat. Obtuse for no real reason. Just because I can adapt to it doesn't mean it's good design.
Another example: configuring ddclient on Debian requires changing /etc/ddclient.conf and /etc/default/ddclient. These files are completely unrelated in every possible way and you have to change them to get the service to behave reasonably. Is there any technical reason why they couldn't just be in /etc/ddclient/? Or, hey, how about naming directories something actually meaningful and putting them in /conf/ddclient/?
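For anyone who hasn't seen it, the split is roughly this (abridged and from memory, so treat the exact keys as illustrative):

    # /etc/ddclient.conf - what to update and how:
    protocol=dyndns2
    login=myuser
    password=secret
    myhost.example.org

    # /etc/default/ddclient - whether and how Debian runs the daemon:
    run_daemon="true"
    daemon_interval="300"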

All the rest of your post is based around the assumption that having a system fully managed by a package manager is possible, which is not always the case:
* Not all software is available from repositories or in installable packages.
* The repository may not contain the latest version of the particular package. Perhaps not even the latest major version. Debian for example is particularly notable for this.
If the package manager fails you, you have to compile from source (binary compatibility? Pah! Who cares about that? We need more file systems), and then the dependency hunt is on.

PS: Please excuse the tone. I'm a bit annoyed for reasons unrelated to this thread. I do think the complaints are legitimate, though.
All the rest of your post is based around the assumption that having a system fully managed by a package manager is possible, which is not always the case:
* Not all software is available from repositories or in installable packages.
* The repository may not contain the latest version of the particular package. Perhaps not even the latest major version. Debian for example is particularly notable for this.
If the package manager fails you, you have to compile from source (binary compatibility? Pah! Who cares about that? We need more file systems), and then the dependency hunt is on.


That is exactly what has happened on my system; the reason is that I am 5 versions behind on the distro - I have Fedora 17, and the current release is F22. So there are no binaries or packages of the current releases of llvm and gcc for my old distro. So I built them myself, though apparently I could have built a package from the source with rpm - I found out about that later.

Anyway, I read the documentation, installed the utilities necessary for the build, and make install took care of the rest.

gcc could be a bit tedious though, only because my current one is so old, as is the distro. However, it's not impossible to upgrade further; it's just that I need to use 4.7 to build 4.8, then use 4.8 to build 4.9, and so on up to 5.2 :+) It's easier to just install F22, and those kinds of problems and many others go away. I should really upgrade every six months - that would be easiest.

I might have even bigger problems: my laptop is clunky, with a first-gen i7 chip, and some parts of it may not be hardware-compatible with the latest distro - hopefully not, but then again a better laptop would be great. It's not unreasonable to get a new one after 6 years.

So everything I have is old, but I still don't have problems building, and I don't - rather, can't - rely on a package manager, apart from using it to make packages from source. For example, I could make an rpm file for gcc 4.8 from source.

Anyway, I guess we will have to agree to disagree, otherwise we could go on for years. I have some gardening to do, and F22 to install after that.

Regards :+)
> This in particular is a big "fuck you" to proprietary software
I'm fine with that.


> How do you specify which version of a program you want without a full path or
> a version number in the filename?

> The problem is that so much software looks for the programs without the
> version numbers and doesn't know what to do when you have version numbers in
> the filename
yes, it seems that the solutions involve symlinks and env
It would be nice to simply say `python2=python2.5' before running the scripts.
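In practice the pinning looks something like this (illustrative):

    ls -l /usr/bin/python2
    # /usr/bin/python2 -> python2.7       <- the distro's symlink decides what the bare name means

    python2.5 script.py                   # call a specific version explicitly, or
    PATH=$HOME/bin:$PATH ./script.py      # put your own symlink first on PATH
                                          # (assuming the script's shebang is #!/usr/bin/env python2)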

How does the "each package has its own directory" approach solve this?


> All the rest of your post is based around the assumption that having a system
> fully managed by a package manager is possible, which is not always the case:

> If the package manager fails you, you have to compile from source
Can't you create a package to be administered by your package manager?
like this https://wiki.archlinux.org/index.php/Creating_packages


> Otherwise there wouldn't be a need for GoboLinux.
https://en.wikipedia.org/wiki/GoboLinux
/System/Index/{bin,include,lib}
Contains links to files from each program's {bin,include,lib} directories.
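For concreteness, that layout is roughly (a simplified sketch; the package and version are made up):

    /Programs/GCC/4.9.2/bin/gcc                              # everything for one version of one package
    /Programs/GCC/4.9.2/include/...
    /System/Index/bin/gcc -> /Programs/GCC/4.9.2/bin/gcc     # all the links gathered in one place
    # ...and the traditional /usr/bin, /usr/lib, ... are themselves symlinks into that tree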

What's the difference?

> Why did the GoboLinux people have to completely modify the OS just to avoid
> that default?
GoboLinux uses symlinks and an optional kernel module called GoboHide to achieve all this while maintaining full compatibility with the traditional Linux filesystem hierarchy.

It doesn't seem like a big modification (not saying that they didn't do anything - the hiding part is quite interesting, but I could live without it).


I have some problems with its page: how do you remove a package? They say to use rm -rf, but what about the symlinks? And the dependencies?