
LFS 6.8 (Part 12): Preparing to Build the LFS System

Just like I broke up building the LFS toolchain into several parts, I will break up building the LFS system.

Right now I have my temporary toolchain, but to start building the LFS system, I need to make a few changes to get it ready. If I break something major, I will have to start from here with the known-good backup of my temporary toolchain (which I just made). As it is, I’m ecstatic that I’ve made it this far with no troubles.

Preliminary Speak

The LFS book recommends that first-time LFS builders (like me!) should build without optimizations, since those speed gains can come with problems in compilation and bugs in the “optimized” software. Will I go back and try optimization later? Possibly, but not very likely unless I’m testing on a faster system (it took me 30 hours to get this far, so I’m feeling conservative).

The book also emphasizes that I should build all the packages in order, so “no program accidentally acquires a path referring to /tools hard-wired into it.”

Virtual Kernel File Systems

The kernel uses several virtual (RAM-hosted) filesystems, but they are mounted as though they are actually on disk; therefore, I need to create their host directories.

The kernel also requires a few virtual device nodes (those things I mostly don’t understand that crowd /dev) in order to boot at all, so I need to create them manually.

Most of /dev, though, isn’t populated for the LFS system that I will chroot into. Since /dev is normally filled during boot, and the LFS chroot has never booted, I have no valid system information yet, and if I tried to build anything, the software would be baffled by the complete lack of hardware. I got around this by using a bind mount (a mount that mirrors an existing mount point or directory) of my build environment’s (the LFS live CD) /dev. This even allows me to add/remove hardware on the fly, since the changes in /dev will be mirrored to the chrooted LFS build environment.

Finally, I mount a few more virtual kernel file systems that are empty, but that the kernel will need later, including /dev/pts, /dev/shm, /proc, and /sys.
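
The whole sequence, as I remember it from the LFS 6.8 book (chapter 6), looks roughly like the sketch below — verify against the book before running it for real. The `run` helper prints each command instead of executing it unless LFS_APPLY=1 is set, so this is safe to dry-run unprivileged:

```shell
#!/bin/sh
# Sketch of the virtual kernel file system setup; commands recalled from the
# LFS 6.8 book, so treat the exact flags and device numbers as assumptions.
LFS=${LFS:-/mnt/lfs}

# Dry-run guard: print the commands unless explicitly told to apply them.
run() { if [ "${LFS_APPLY:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

run mkdir -pv "$LFS/dev" "$LFS/proc" "$LFS/sys"
run mknod -m 600 "$LFS/dev/console" c 5 1   # minimal device nodes needed to boot
run mknod -m 666 "$LFS/dev/null" c 1 3
run mount -v --bind /dev "$LFS/dev"         # mirror the host's populated /dev
run mount -vt devpts devpts "$LFS/dev/pts"
run mount -vt tmpfs shm "$LFS/dev/shm"
run mount -vt proc proc "$LFS/proc"
run mount -vt sysfs sysfs "$LFS/sys"
```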

Package Management Speak

LFS doesn’t include any package management scheme, as the goal of LFS is to teach about Linux’s inner workings. The LFS book does discuss various solutions for package management, along with their strengths and weaknesses, but leaves their implementation up to the user.

Almost as a side note, the LFS book mentions that LFS can be handy for use on multiple computers with similar hardware, and can be deployed with only a little modification. I don’t really know why anyone would do this in the real world; I’d rather use a Linux system with package management of some sort.

Entering the Chroot Environment

So with the temporary toolchain ready and the virtual kernel file systems mounted and mirrored, I am ready to chroot into the $LFS directory and start building the final LFS system!

The actual chroot command is pretty long, but is fairly self-explanatory. The new chroot environment is quite similar to the live CD environment, except that it is tuned to make building with the temporary toolchain more painless (so the system uses the newly built software right away automatically).
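
For reference, here is the chroot invocation as I remember it from the book — the flags and paths are from memory, so check the book’s exact version before use. Same dry-run guard as before (set LFS_APPLY=1 to execute):

```shell
#!/bin/sh
# The LFS chroot command, recalled from the LFS 6.8 book (verify before use).
LFS=${LFS:-/mnt/lfs}
run() { if [ "${LFS_APPLY:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

# env -i empties out the live CD's environment, and +h stops bash from hashing
# binary locations, so freshly installed /tools software is picked up right away.
run chroot "$LFS" /tools/bin/env -i \
    HOME=/root TERM="$TERM" PS1='\u:\w\$ ' \
    PATH=/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin \
    /tools/bin/bash --login +h
```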

Creating Directories

At this point, $LFS is pretty sparse, consisting only of the /dev, /sources, and /tools directories (if memory serves). I needed a full directory tree, so I created one with the dozen-odd commands in the LFS book.

Ah, that’s better.

Creating Essential Files and Symlinks

A few final tweaks configure the LFS chroot environment to be ready to build! After adding a few symlinks, directories, and files, I started a new shell (thus fixing the “I have no name!” in the prompt) and created some log files.
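
The “I have no name!” prompt appears because the chroot has no /etc/passwd for bash to look names up in. A minimal sketch of the fix, trimmed to just the root entries (the book’s versions list several more users and groups), writing to a scratch directory that stands in for the chroot’s /etc:

```shell
#!/bin/sh
# Minimal /etc/passwd and /etc/group along the lines of the LFS book's
# versions (entries recalled, not verbatim; scratch dir stands in for /etc).
ETC=$(mktemp -d)

cat > "$ETC/passwd" << "EOF"
root:x:0:0:root:/root:/bin/bash
EOF

cat > "$ETC/group" << "EOF"
root:x:0:
EOF

# Inside the chroot, a fresh login shell re-reads these files, which is what
# replaces "I have no name!" with root's name:
#   exec /tools/bin/bash --login +h
grep -q '^root:' "$ETC/passwd" && echo "root entry present"
```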

Time to build LFS!

LFS 6.8 (Part 8): Temporary Toolchain Second Pass and Sanity Check

The LFS temporary toolchain is going to take a while to cover completely, so I’m breaking it up into several parts for readability.

Binutils (Pass 2)

To review:

Binutils contains some of the closest “to the metal” utilities that Linux needs, providing low-level translation of programmer intent into machine-readable binary code. Binutils gets built first as GCC and Glibc will need these lower-level utilities in order to function properly themselves.

The first pass of Binutils was a cross-compile, and this second pass adds a few extra options so that only the first-pass (cross-compiled) Binutils tools and libraries are used. This removes any traces of library or settings contamination from the build environment (I’m using the LFS live CD). The second pass of Binutils is installed to the /tools directory (in the temporary toolchain, /mnt/lfs/tools), overwriting the first-pass version of Binutils.

I estimated Pass #2 of Binutils to take 1 hour and 4 minutes to build, and Pressie breezed through in only 58 minutes.

Not quite everything is right yet: the linker (ld) I just compiled would incorrectly look in /tools for libraries to link against. To remove the contamination of the temporary toolchain itself, I cleaned out the ld subdirectory and recompiled ld to point to /usr/lib and /lib (which is where libraries will live on my final LFS system). Then I copied the new ld to /tools/bin (in the temporary toolchain, /mnt/lfs/tools/bin).
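
The commands for that, as I recall them from the Binutils pass 2 page of the book (run from the Binutils build directory; verify before use). Dry-run by default; set LFS_APPLY=1 to execute:

```shell
#!/bin/sh
# ld rebuild with the final library search path; recalled from the LFS 6.8
# book, so treat the paths as assumptions.
run() { if [ "${LFS_APPLY:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

run make -C ld clean                     # throw away the /tools-pointing ld
run make -C ld LIB_PATH=/usr/lib:/lib    # rebuild it to search /usr/lib and /lib
run cp -v ld/ld-new /tools/bin           # stash it for use after entering the chroot
```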

GCC (Pass 2)

To review again:

GCC is the GNU Compiler Collection, and it is the main utility on a Linux box for compiling programs written in C and C++ (and a host of others that are irrelevant to LFS for now).

The first pass of GCC was a cross-compile, and the second pass of GCC will be an uncontaminated build. GCC required a patch and several tweaks to prepare it for building the final LFS system, mostly changes to various build scripts so they use the /tools Glibc libraries rather than the live CD’s Glibc. These tweaks took quite a bit of typing, so be sure to check several times for typos before hitting that last Enter. Since Pressie is based on an i586 CPU, I skipped a step that is intended to prevent GCC from linking to x86_64 libraries on a 64-bit host environment. Next, I extracted GMP, MPFR, and MPC into GCC’s source directory for the build process. I needed both the C and C++ compilers to be built, so there are some extra C++-specific configure options that were unnecessary in the first pass. The second pass of GCC installed to /tools (/mnt/lfs/tools in the temporary toolchain), overwriting the first-pass version.

I expected GCC to build in 7 hours and 21 minutes, but it took 8 hours and 12 minutes.

Finally, since many programs link to cc (a generic C compiler) instead of to gcc specifically, I created a cc symlink to gcc.

Sanity Check!

LFS recommends doing a basic sanity check on the temporary toolchain before going further. Essentially, I compiled an empty C program and made sure it requested the correct dynamic linker (the new one in /tools, and not the one from the live CD environment), then deleted the test output. At this point, everything is A-OK to-the-letter perfect. Now I’m ready to build the rest of the utilities in the temporary toolchain!

LFS 6.8 (Part 7): Temporary Toolchain First Pass and Adjustments

The LFS temporary toolchain is going to take a while to cover completely, so I’m breaking it up into several parts for readability.

Binutils (Pass 1)

Binutils contains some of the closest “to the metal” utilities that Linux needs, providing low-level translation of programmer intent into machine-readable binary code. Binutils gets built first as GCC and Glibc will need these lower-level utilities in order to function properly themselves.

This first pass of Binutils is a cross-compile, making it refer specifically to the hardware it is being built on, rather than to the hardware the current build environment (I’m using the LFS live CD) was built for. Binutils is installed to the /tools directory (in the temporary toolchain, /mnt/lfs/tools) because right now / (root) refers to the live CD environment.

I put the configure and make commands for Binutils into a time{} container to measure an SBU (Standard Build Unit) so I could then reliably guess build times for other packages. It turns out an SBU on Pressie is 49 minutes and 8 seconds.
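
The time { } wrapper gives the raw measurement; turning a package’s SBU rating into a wall-clock estimate is then just arithmetic. A sketch (the 4.4 SBU rating below is a made-up example package, not a figure from the book):

```shell
#!/bin/sh
# Scaling build-time estimates from a measured SBU (49m08s on Pressie),
# measured with: time { ./configure --prefix=/tools && make && make install; }
SBU_SECONDS=$((49 * 60 + 8))          # one SBU, in seconds

RATING_TENTHS=44                      # i.e. a hypothetical 4.4 SBU package
EST=$((SBU_SECONDS * RATING_TENTHS / 10))
printf 'estimated build time: %dm%02ds\n' $((EST / 60)) $((EST % 60))
```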

GCC (Pass 1)

GCC is the GNU Compiler Collection, and it is the main utility on a Linux box for compiling programs written in C and C++ (and a host of others that are irrelevant to LFS for now).

The first pass of GCC is also a cross-compile. It requires GMP, MPFR, and MPC, so I extracted their sources into GCC’s source directory so they could be used during the build process. GCC is installed to /tools (/mnt/lfs/tools in the temporary toolchain). Only the C compiler is required for now, so that’s all I built.

I expected GCC to build in 4 hours and 5 minutes, but in reality it took 4 hours and 44 minutes.

As a final touch, I fake out the soon-to-be-built Glibc by symlinking libgcc.a to libgcc_eh.a, since apparently libgcc.a has all the things Glibc wants from libgcc_eh.a. Ours not to reason why….
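
The trick rehearsed in a scratch directory; on the real build the target path is found with something like `gcc -print-libgcc-file-name | sed 's/libgcc/&_eh/'` (recalled from the book, so verify):

```shell
#!/bin/sh
# Glibc asks for libgcc_eh.a; a symlink quietly hands it libgcc.a instead.
set -e
D=$(mktemp -d)
touch "$D/libgcc.a"                 # stand-in for the freshly built libgcc
ln -sv libgcc.a "$D/libgcc_eh.a"    # the fake-out symlink
```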

Linux API Headers

Now this part took me a short while to figure out: to compile the Linux API headers I had to extract the entire Linux kernel source, but only compile the headers with some special commands.

The Linux API headers expose the Linux API to Glibc when I build Glibc; that way, Glibc knows what the kernel can and can’t do, and it compiles accordingly. The headers are installed to /tools/include (/mnt/lfs/tools/include in the temporary toolchain).
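
Those special commands, as I recall them from the book (run inside the extracted kernel source tree; verify against the book). Dry-run by default; set LFS_APPLY=1 to execute:

```shell
#!/bin/sh
# Headers-only build of the kernel sources; commands recalled from LFS 6.8.
run() { if [ "${LFS_APPLY:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

run make mrproper                               # scrub the source tree first
run make headers_check
run make INSTALL_HDR_PATH=dest headers_install  # stage headers under ./dest
run cp -rv dest/include/* /tools/include        # then copy into the toolchain
```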

It actually took longer to extract the sources than to build this one: Pressie shattered the predicted build time of 5 minutes, building in only 3.

Glibc (not EGlibc)

Glibc is the main C library. Basically, it is a standardized collection of basic functionality for the C language (like memory allocation, file reading, etc.).

I needed to apply a patch to Glibc that fixes a bug preventing it from compiling with the version of GCC I built earlier. I gave Glibc a dedicated build directory, as recommended, and then ran a command to compile it as compatible with i486 (i386 is no longer supported by Glibc). I’m not sure why Glibc’s authors didn’t make i486 the default, but I’m sure it made sense at the time.

Glibc is also cross-compiled, using the first pass builds of Binutils and GCC to configure itself according to the capabilities of Pressie’s hardware. After adjusting the toolchain settings, I will compile Binutils and GCC against the new and Pressie-specific Glibc, so they will be free of any polluting influences of the live CD environment. Glibc is installed to /tools (/mnt/lfs/tools in the temporary toolchain) as well.

A predicted 5 hours and 38 minutes was really 6 hours and 7 minutes.

Adjusting the Toolchain

With the temporary C libraries in place, it is time to adjust things to point toward these new libraries instead of the libraries provided by the live CD environment. I ran a sed script that removed all references in GCC’s specs file to /lib (libraries on the live CD) and pointed them instead to /tools (temporary toolchain libraries on the hard drive).

Finally, I ran a sanity check like the book recommended, and everything worked perfectly! Next I will compile the second passes of Binutils and GCC, together with twenty-odd smaller packages that I’ll need for the rest of the temporary toolchain.

LFS 6.8 (Part 6): Temporary Toolchain Overview

The LFS temporary toolchain is going to take a while to cover completely, so I’m breaking it up into several parts for readability.

The first part of actually building LFS is building a temporary toolchain to prevent contamination from the host environment (in my case, the LFS live CD).

I will build Binutils and GCC twice during the beginning of this process: first to tune them for the hardware I’m using; and second to allow them to refer only to themselves, and not to the live CD versions. This was the trickiest part of the build for me to understand, but it made sense the more I thought about it. Next, I will build a minimal set of tools that will let me build (and test) my final LFS system.

At several points in this process, I must adjust the temporary toolchain to be more self-referential, until finally I chroot into the final build environment and discard the temporary toolchain.

Before I actually begin building the temporary toolchain, I am going to explain my process for actually building the toolchain, and most of the rest of LFS, so it can be assumed in later entries for brevity’s sake:

  • Since my sources are in /sources (/mnt/lfs/sources during the temporary toolchain build), I will be there to begin with each program build.
  • I will then use tar -zxf on .tar.gz sources and tar -jxf on .tar.bz2 sources to decompress them into their own directory (usually <package-name>-<version>).
  • I will then cd into the newly extracted directories and follow the instructions as given in the book.
  • I will, where possible, use a time { ./configure && make && make install } template so I can give an accurate account of the length of each build (a slight deviation from the book, except for the first time I build Binutils for the temporary toolchain).
  • I will then cd back into /sources (again, /mnt/lfs/sources during the temporary toolchain build) and rm -rf the extracted source files and any dedicated build directories. This keeps down the amount of hard disk space needed, especially for a very constrained system like Pressie, and it will ensure a completely clean build for software that I have to build multiple times.
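
The routine above, rehearsed on a throwaway tarball (foo-1.0 is a stand-in for a real source package, and a scratch directory stands in for /sources):

```shell
#!/bin/sh
# Extract, "build", and clean up one package, LFS-style.
set -e
cd "$(mktemp -d)"                     # stand-in for /sources

mkdir foo-1.0 && echo hello > foo-1.0/README
tar -zcf foo-1.0.tar.gz foo-1.0 && rm -rf foo-1.0   # fake a downloaded tarball

tar -zxf foo-1.0.tar.gz               # -jxf for .tar.bz2 sources
cd foo-1.0                            # ...configure/make/make install here...
cd .. && rm -rf foo-1.0               # clean up to keep disk usage down
```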

And with these preliminaries, I am ready to build the temporary toolchain.

LFS 6.8 (Part 5): Version Checks and Final Preparations

Continuing my LFS adventures, it is time to check that the base system (in this case, the LFS 6.3-r2160 live CD) has the necessary tools for successfully compiling LFS.

There is a handy version check bash script on page xviii of the LFS book that I ran. I typed it in by hand, but it was satisfying work because I learned some interesting text processing tidbits in the process. It turns out that everything on the live CD is the exact version that LFS 6.8 requires, except that grep is version 2.5.1, but the book lists the minimum required version as 2.5.1a — hopefully everything will work anyway.
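
A few lines in the spirit of that script — the real one on page xviii checks many more tools and compares against minimum versions; this sketch just prints the first version line of each:

```shell
#!/bin/sh
# Mini version check: print each tool's first --version line.
bash --version | head -n1
sed --version  | head -n1
tar --version  | head -n1
grep --version | head -n1
```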

Next, I set up the $LFS variable in bash. The $LFS variable is important because it is used throughout the book (trust me, I’ve looked) while building and installing LFS. I haven’t deviated from the book yet, so I was able to type in all the commands verbatim.

Then I created the $LFS/tools directory, which I will use while I build the temporary toolchain. Then I created a /tools symlink on the live CD’s / (root directory) — this means the toolchain installs its files to /mnt/lfs/tools (aka $LFS/tools), even though, thanks to the symlink, it thinks it installed them to /tools. That way, when the first toolchain is complete and ready to build the second (clean) toolchain, and I have to chroot into $LFS, the toolchain can still look in /tools and find everything it needs.
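
The setup, rehearsed under a scratch directory that stands in for the live CD’s / (the real commands need root and create the symlink in / itself):

```shell
#!/bin/sh
# $LFS/tools plus the /tools symlink, in a scratch stand-in for /.
set -e
FAKE_ROOT=$(mktemp -d)

export LFS="$FAKE_ROOT/mnt/lfs"
mkdir -pv "$LFS/tools"
ln -sv "$LFS/tools" "$FAKE_ROOT/tools"   # on the real system: ln -sv $LFS/tools /

# Anything that writes "to /tools" now really lands inside $LFS:
readlink "$FAKE_ROOT/tools"
```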

Next, I followed the instructions for creating a new user — lfs — and setting up a known-good bash environment. I then used the source command to enter into my newly created bash environment: spartan, but effective.

Finally, the LFS book addresses SBUs (Standard Build Units) and test suites before beginning the first toolchain. SBUs are a rough measure of how long a particular package will take to compile. Skipping ahead some, my SBU is 49 minutes and 5 seconds, a far cry from the 3-odd minutes of the “fastest” systems the LFS book speaks of. The book also recommends skipping the test suites that are usually run on the first toolchain, since they offer near-zero benefit at this point.

I am now ready to build the first LFS toolchain. Onward, ho!

LFS 6.8 (Part 4): Partitioning

After getting Pressie ready for building LFS, my next step was to fire her up with the LFS live CD and partition her hard drives.

The LFS live CD required me to choose some basic settings like locale, system time, and console settings as it booted up. Though I still have a one-hour-off clock, the other (default) live CD settings seem to be working fine.

First off, I used cfdisk (a very simple partitioning tool with an ncurses interface) to partition my disks. Soon I’d set up my disks how I wanted them (after I realized that IDE hard drives are at /dev/hdn, not like modern SATA hard drives at /dev/sdn):

  • One large / (root) partition on the main 8.4GB hard disk
  • Two partitions on the secondary 2.1GB hard disk
    • a 1GB /home partition
    • a 1.1GB swap partition (to try to offset the tiny amount of RAM)

I was going to have a separate /boot partition as recommended in the LFS book, but decided against it for simplicity’s sake — I doubt I’ll use multiple Linux distros on Pressie at the same time.

Next, I used mke2fs to format / and /home as ext3, and mkswap to make a swap partition. After that, I mounted the new ext3 and swap partitions — success!
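
Reconstructed from the steps above — the partition numbers (hda1 for /, hdb1 for /home, hdb2 for swap) are my guesses, not recorded fact. The `run` helper prints the commands unless LFS_APPLY=1 is set, since these are destructive and need root:

```shell
#!/bin/sh
# Format and mount the LFS partitions; device names are assumptions.
run() { if [ "${LFS_APPLY:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

run mke2fs -jv /dev/hda1        # -j adds the journal that makes it ext3
run mke2fs -jv /dev/hdb1
run mkswap /dev/hdb2

run mkdir -pv /mnt/lfs
run mount -v /dev/hda1 /mnt/lfs
run mkdir -pv /mnt/lfs/home
run mount -v /dev/hdb1 /mnt/lfs/home
run swapon -v /dev/hdb2
```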

I plugged in a USB thumbdrive loaded with the LFS sources and patches, then copied them to the main 8.4GB drive. I unplugged my thumbdrive and plugged in a mouse (just in case).

So far so good!

This Is Your Brain On LFS


LFS! We meet for the first time for the last time!

I’ve been spending the last few days getting some presearch (you know, pre-research) done for installing Linux from Scratch, and I’m slowly forming a rough mental checklist of what I need to do to install LFS. I’m a total newbie at LFS, and even Linux, so my list may be much longer than it would be for more experienced Linux users, but hey — we all start somewhere, right?

Get My Hardware Ready

I’ve found an old unused hard drive at work, a beautiful 2.1GB Seagate ST32122A (4500RPM, 12msec seek time, even a 128K cache!), which will make building LFS on Limited Edition much easier, since I don’t have to worry so much about toasting GRUB accidentally. Or toasting Limited Edition accidentally 😯 . I have no idea if it actually works, but I’m hoping so — I like putting old things to use in creative ways. If it fails, I guess I’ll either have to build LFS in Piggybacker or get a similarly old disk on eBay.

Do The Pre-Reading

(The preading?) There is a lot of pre-reading for me to do! I had no idea: this is a short list of the recommended reading and resources to scour before building LFS 6.7 (dates in brackets are of last known update):

I put together this list because there’s not really any single page that links to good resources for building a single-boot Intel x86 LFS — but now there is! Let me know if I missed something or if there’s newer versions available, and I’ll add it here.

Here’s some further links for the adventurous multi-booters-to-be:

And here’s some links to some major hosts of free software source code:

By no means are any of these meant to be exhaustive — they are basically a jumpstart for my fellow LFS newbies. If I find more interesting or valuable material, I’ll post about it on BG, too, so keep an eye out in coming weeks.

Practice Building Packages

It’s not all about reading: no, LFS is all about getting your hands dirty, too, so I’ll be building software from source many times before I actually move on to LFS. Emacs is recommended, as it’s both useful and well-traversed software. I’ll keep an eye out in coming weeks for other good packages to practice building, too.

Choose An LFS Build Distro

The Linux Mint 9 KDE / Kubuntu 10.10 hybrid that Limited Edition is running isn’t up to the task of building LFS. I can install all of the required packages, but many of them are too recent for LFS 6.7 to have adjusted, and beyond that, I’d rather not mess up my current install any further 😛 . So rather than downgrading my packages and crossing my fingers, I’ll just use a live CD. I may use a normal distro, but LFS highly recommends using the LFS LiveCD as the host for building an LFS system, and I’m inclined to do just that.

Read The LFS Book — Repeatedly!

Then once I’ve done all that prepwork, I will inhale the LFS Book until I know it backwards and frontwards and sideways.

Take A Deep Breath… And Begin

And then I’ll start building LFS 🙂 . From my cursory searches, it seems that getting a working LFS build the first time around takes about a week, but it is possible to have a ready LFS system in a few hours’ time, if the RNG is on your side.

The reason I’ve slated so much prepwork is twofold: First, I want to take my time with this, because my goal is not just to rush to have a working LFS build, but also to understand as much as I can about Linux underpinnings in general; and second, I want the actual build process to be as painless as possible when the time actually comes. Besides, documenting the process is what this blog is all about!