Picking Up Where I Left Off

It is daunting to think of writing about all the things I’ve done since September 2011, so I won’t be writing about all of them (or possibly any of them). However, I will jump back into some of the same topics I was writing about 18 months ago, especially my LFS series.

I will also work on freshening the site in small increments, including revising old posts. I have mixed feelings about revising my older content, but I can still have my old content in WordPress revision control while making new content (or at least polishing the old).

An Inveterate Perfectionist

My natural tendency is to tweak my writing (and everything else) to within an inch of its life to squeeze out the maximum potential. However, since I am still a novice writer, my writing is frankly sub-par (at least by my own standards). I started this blog because I wanted to have the freedom to make some mistakes in order to actually write things. However, because the blog format is, shall we say, not perfectly suited to how I want to express myself, I often feel constrained to write poorly.

As you may have noticed, I took a six-month hiatus to focus on my personal life. I intended the LFS series to get me back into writing again, but the *ahem* intense tedium of describing in detail the building of a Linux system, while valuable to me, is probably minimally interesting to anybody who isn’t me.

In fact, it’s even pretty boring to me. That’s a problem: it’s one of the reasons I stopped writing about learning Python. I got bored with telling my own story because I kept obsessing over how to present technical details I’d already learned, to the point where I simply wasn’t learning anymore.

I’ve been trying to figure out how to assuage both my creative and my technical halves. I don’t have definitive answers yet, but I do know that a few things will be different:

  • I am no longer going to try to make long and meaningful posts — just meaningful ones.
  • I am no longer going to worry about how often I post, but I am probably going to post more often.
  • I am going to diversify my writing into my other interests so I always have something to write about (curse you, writer’s block!).

I predict that BG will undergo many changes, and probably more than a few things will break in the process. But then, that’s how I learn — by fixing the things I just broke. ;)

LFS 6.8 (Part 12): Preparing to Build the LFS System

Just like I broke up building the LFS toolchain into several parts, I will break up building the LFS system.

Right now I have my temporary toolchain, but to start building the LFS system, I need to make a few changes to get it ready. If I break something major, I will have to start from here with the known-good backup of my temporary toolchain (which I just made). As it is, I’m ecstatic that I’ve made it this far with no troubles.

Preliminary Speak

The LFS book recommends that first-time LFS builders (like me!) should build without optimizations, since those speed gains can come with problems in compilation and bugs in the “optimized” software. Will I go back and try optimization later? Possibly, but not very likely unless I’m testing on a faster system (it took me 30 hours to get this far, so I’m feeling conservative).

The book also emphasizes that I should build all the packages in order, so “no program accidentally acquires a path referring to /tools hard-wired into it.”

Virtual Kernel File Systems

The kernel uses several virtual (RAM-hosted) filesystems, but they are mounted as though they are actually on disk; therefore, I need to create their host directories.

The kernel also requires a few virtual device nodes (those things I mostly don’t understand that crowd /dev) in order to boot at all, so I need to create a few manually.

Most of /dev, though, isn’t populated for the LFS system that I will chroot into. Since /dev is normally filled during boot, and the LFS chroot has never booted, there is no valid device information yet; if I tried to build anything, the software would be baffled by the apparent lack of hardware. I got around this with a bind mount (a mount that mirrors an existing mount point or directory) of my build environment’s (the LFS live CD’s) /dev. This even lets me add or remove hardware on the fly, since changes in /dev are mirrored into the chrooted LFS build environment.

Finally, I mount a few more virtual kernel file systems that are empty, but that the kernel will need later, including /dev/pts, /dev/shm, /proc, and /sys.
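Roughly, the preparation looks like this. This is a sketch, not the book’s exact commands: here $LFS points at a throwaway directory so the directory creation is safe to run unprivileged, while the mknod and mount commands (which need root) are shown as comments.

```shell
LFS=$(mktemp -d)    # stand-in for /mnt/lfs so this sketch runs unprivileged

# Host directories for the virtual file systems:
mkdir -p "$LFS"/{dev/pts,dev/shm,proc,sys}

# The device nodes the kernel needs to boot, plus the bind mount that mirrors
# the live CD's populated /dev into the chroot (all of these need root, so
# they are shown commented out):
#   mknod -m 600 "$LFS/dev/console" c 5 1
#   mknod -m 666 "$LFS/dev/null"    c 1 3
#   mount --bind /dev "$LFS/dev"
#   mount -t devpts devpts "$LFS/dev/pts"
#   mount -t tmpfs  shm    "$LFS/dev/shm"
#   mount -t proc   proc   "$LFS/proc"
#   mount -t sysfs  sysfs  "$LFS/sys"
```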

Package Management Speak

LFS doesn’t include any package management scheme, as the goal of LFS is to teach about Linux’s inner workings. The LFS book does discuss various solutions for package management, along with their strengths and weaknesses, but leaves their implementation up to the user.

Almost as a side note, the LFS book mentions that LFS can be handy for use on multiple computers with similar hardware, and can be deployed with only a little modification. I don’t really know why anyone would do this in the real world; I’d rather use a Linux system with package management of some sort.

Entering the Chroot Environment

So with the temporary toolchain ready and the virtual kernel file systems mounted and mirrored, I am ready to chroot into the $LFS directory and start building the final LFS system!

The actual chroot command is pretty long, but is fairly self-explanatory. The new chroot environment is quite similar to the live CD environment, except that it is tuned to make building with the temporary toolchain more painless (so the system uses the newly built software right away automatically).
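For reference, the command has roughly this shape in the 6.x books (echoed here instead of executed, since chroot needs root and a prepared $LFS). env -i empties the inherited environment, and /tools/bin sits last in PATH so each freshly built final-system program immediately shadows its temporary-toolchain counterpart.

```shell
LFS=/mnt/lfs
chroot_cmd=$(echo chroot "$LFS" /tools/bin/env -i \
    HOME=/root TERM="$TERM" 'PS1=\u:\w\$ ' \
    PATH=/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin \
    /tools/bin/bash --login +h)
echo "$chroot_cmd"
```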

Creating Directories

At this point, $LFS is pretty sparse, consisting only of the /dev, /sources, and /tools directories (if memory serves). I needed a full directory tree, so I created one with the dozen-odd commands in the LFS book.
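A condensed sketch of those commands, run here against a throwaway root so it is safe outside the chroot (in the real chroot, the paths start at /, and the book’s list is a bit longer):

```shell
root=$(mktemp -d)   # stand-in for / inside the chroot

mkdir -p "$root"/{bin,boot,etc/opt,home,lib,media,mnt,opt,sbin,srv,var}
mkdir -p "$root"/usr/{bin,include,lib,sbin,src,share/{doc,info,man}}
mkdir -p "$root"/var/{lock,log,mail,run,spool}

install -d -m 0750 "$root/root"                  # root's home: not world-readable
install -d -m 1777 "$root/tmp" "$root/var/tmp"   # sticky bit, like the real /tmp
```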

Ah, that’s better.

Creating Essential Files and Symlinks

A few final tweaks configure the LFS chroot environment to be ready to build! After adding a few symlinks, directories, and files, I started a new shell (thus fixing the “I have no name!” in the prompt) and created some log files.
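The “I have no name!” prompt happens because bash can’t map UID 0 to a name until /etc/passwd exists. Here is the idea in miniature (the book’s actual files define several more users and groups; this sketch writes to a scratch directory instead of the chroot’s /etc):

```shell
etc=$(mktemp -d)   # stand-in for /etc inside the chroot

cat > "$etc/passwd" <<'EOF'
root:x:0:0:root:/root:/bin/bash
EOF

cat > "$etc/group" <<'EOF'
root:x:0:
EOF

# In the chroot, a fresh login shell then picks the name up:
#   exec /tools/bin/bash --login +h
```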

Time to build LFS!

LFS 6.8 (Part 11): Tweaking and Backing Up the Temporary Toolchain

The LFS temporary toolchain is going to take a while to cover completely, so I’m breaking it up into several parts for readability.

There are a couple final things to do to the temporary toolchain that will get it into a form suitable for backup. Backups will be really nice to have if things go south later on, since I won’t have to rebuild the entire toolchain again.

Stripping

Stripping the temporary toolchain of debugging symbols is optional, but since my system is pretty space-constrained, I went ahead and did it. Debugging symbols are “unnecessary” according to the LFS book (presumably because of the “temporary” part of “temporary toolchain”). My LFS partition was at 496,584kB (~485MB) before stripping, and was brought down to 339,992kB (~332MB) — a savings of more than twice the 70MB that the LFS book led me to expect!

Removing documentation can save still more space, so I removed the temporary toolchain’s documentation too. That brought my 339,992kB (~332MB) LFS partition down to 314,008kB (~307MB). Not bad at all!
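The two space-saving steps look roughly like this, sketched against a scratch copy of /tools so the destructive commands are safe to demonstrate (on the real system they run against /tools itself, and strip operates on real binaries, so it is shown commented out):

```shell
tools=$(mktemp -d)   # stand-in for /tools
mkdir -p "$tools"/{bin,lib,share/{doc,info,man},info,man}
touch "$tools/share/doc/README"

# Stripping debugging symbols (needs real ELF binaries in place):
#   strip --strip-debug    "$tools"/lib/*
#   strip --strip-unneeded "$tools"/{,s}bin/*

# Documentation removal is just a recursive delete:
rm -rf "$tools"/{,share/}{doc,info,man}
```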

These steps leave me with nearly 7GB free on my LFS partition, which gives me plenty of room for building the rest of LFS, and possibly even BLFS.

Changing Ownership

Since I will be chrooting into a clean environment where there is no lfs user, the LFS book recommends changing ownership of all the files I’ve built thus far to root. This both enhances security (no one who later creates a user with the same ID as the now-nonexistent lfs user can abuse those files) and makes things nice and consistent for the next stages in building LFS.

Backing Up

After stripping and changing ownership of the files in the temporary toolchain to root, the LFS /tools partition is ready for backing up. As the LFS book says, “subsequent commands in chapter 6 will alter the tools currently in place, rendering them useless for future builds.” I took the hint.

The LFS book leaves the backup method to the ingenuity of the user. I used a tar command to compress the /tools directory into a .bz2 file and copy it to my future /home directory (and after that, I copied it to my USB drive).
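Something like this round trip is all it takes (sketched here against a scratch directory standing in for /tools, since the real backup runs as root from $LFS):

```shell
cd "$(mktemp -d)"                      # stand-in for the backup destination
mkdir -p tools/bin
echo 'fake toolchain' > tools/bin/marker

tar -cjpf tools-backup.tar.bz2 tools   # -j: bzip2, -p: preserve permissions
rm -rf tools                           # simulate the disaster...
tar -xjpf tools-backup.tar.bz2         # ...and the recovery
```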

All Done!

Well, the temporary toolchain is done; the rest is yet to come. Everything thus far should have built in 24 hours 28 minutes (i.e., just over a day), but actually took 30 hours 28 minutes. All those extra test suites did their damage.

[Note: Imagine at this point how glad I am to have a backup.]

I am now ready to build the LFS system!

LFS 6.8 (Part 10): The Second Part of the Rest of the Temporary Toolchain

The LFS temporary toolchain is going to take a while to cover completely, so I’m breaking it up into several parts for readability.

I’m about halfway through building the LFS temporary toolchain, and encountering mostly smooth sailing.

Grep

According to Wikipedia, “The name [Grep] comes from the ed command g/re/p (global / regular expression / print).” What Grep does is search files for the patterns you specify.
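For instance, pulling the matching lines out of a log file:

```shell
cd "$(mktemp -d)"
printf 'error: disk full\ninfo: all good\nerror: no space\n' > build.log

# Print only lines matching the regular expression ^error:
grep '^error:' build.log
```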

Grep installed to /tools (in the temporary toolchain, /mnt/lfs/tools).

Grep was supposed to build in 5 minutes, but with the test suites took a total of 19 minutes.

Gzip

Gzip is an older and only fairly efficient compression tool, but it works well enough that it is still a near-universal compression format.

Gzip installed to /tools (in the temporary toolchain, /mnt/lfs/tools).

I calculated Gzip to build in less than 5 minutes, but took 8 minutes. Those test suites! Well, at least I’m getting practice in what to look for.

M4

M4 is (as the LFS book puts it) a macro processor typically used to pre-process source code for a compiler. What does it do that’s so different from any other scripting/macro language? I don’t know, but it probably is very well suited to pre-processing source code for compilers, since that’s what it’s used for :P .

M4 installed to /tools (in the temporary toolchain, /mnt/lfs/tools).

M4 should have taken 10 minutes to build, but test suites inflated that number to 29 minutes.

Make

Ah, Make is our friend! Make automates the compiling of programs by reading that Makefile you keep seeing in the source directory and compiling the program accordingly. I can’t imagine what it would be like doing all of the steps by hand. The LFS book would be huge!

Anyway, Make is installed to /tools (in the temporary toolchain, /mnt/lfs/tools).

For all its versatility, Make was only supposed to need 5 minutes to build, but with its test suite, 7 minutes.

Patch

Patch modifies source code by applying the changes specified in a patch file. This makes it easy to apply small fixes with accuracy without having to wait for the developers to update the whole program.
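In miniature, the diff/patch round trip looks like this (the patch application is guarded in case patch isn’t installed on the host):

```shell
cd "$(mktemp -d)"
printf 'Hello wrold\n' > greeting.txt       # the buggy "source"
printf 'Hello world\n' > greeting.fixed     # the developer's corrected copy

# diff records the change (it exits 1 when the files differ, hence || true):
diff -u greeting.txt greeting.fixed > fix.patch || true

# patch replays the recorded change against the original file:
if command -v patch >/dev/null; then
  patch greeting.txt fix.patch
fi
```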

Patch is installed to /tools (in the temporary toolchain, /mnt/lfs/tools).

Patch was projected for a less-than 5 minute build, and finished in a satisfying 4 minutes. Including the test suite!

The test suite threw up two errors, though. One error seemed to depend on the ed text editor, which the LFS live CD apparently doesn’t have; however, that test shouldn’t have run anyway, so no biggie. The other error I couldn’t even track down; it seems to be a failure of cat or something. Since I couldn’t find a way to fix it (or even figure out what it was), I decided to forge ahead regardless.

Perl

Perl is a general-purpose scripting language, often represented with a camel mascot. There is much that can be said about Perl, but you have Google for that :) .

First I had to apply a patch “to adapt some hard-wired paths to the C library” and run Perl’s configure script to build a minimal Perl for testing purposes (which is all it will be used for in the temporary toolchain). When I installed Perl, I only needed the utilities and one library, so it built in a lot less time than it will later. Perl installed to /tools (in the temporary toolchain, /mnt/lfs/tools).

I did skip the test suite, since the LFS book informed me that all of the rest of Perl would be built as well for the test; because Perl was only partially built, “installing” the temporary toolchain Perl was really just copying directories. Again, because this is a minimal version of Perl for the toolchain, it should only have taken 39 minutes to build, but it took 2 hours 9 minutes. I have no idea why it was so slow; I guess Pressie is getting tired.

Sed

Sed is a commonly used *nix tool for editing text, and is especially handy for shell scripts.
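Sed’s bread and butter is the substitute command, which is exactly what makes it so handy in scripts:

```shell
cd "$(mktemp -d)"
printf 'prefix=/usr/local\n' > config.in

# s@old@new@ — using @ as the delimiter keeps paths readable:
sed 's@/usr/local@/tools@' config.in > config.out
cat config.out   # prefix=/tools
```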

Installing Sed was straightforward, and Sed was installed to /tools (in the temporary toolchain, /mnt/lfs/tools).

5 minutes is all Sed should have taken to install, but the optional test suite inflated that to a full 7 minutes.

Tar

Tar stands for “tape archiver”, so you can guess how far back its roots go. Tar made backups to tape much easier by concatenating files into a single .tar file, so the tape didn’t have to rewind and fast-forward back and forth between a single file and the master file table.

Anyway, Tar was installed to /tools (in the temporary toolchain, /mnt/lfs/tools).

Tar should have taken 15 minutes to build, but with the optional test suite, Tar took a full 57 minutes to build (all okay, though).

Texinfo

Texinfo is a suite of utilities for manipulating info pages (a longer, more detailed version of the man pages).

Texinfo was installed to /tools (in the temporary toolchain, /mnt/lfs/tools).

Simple to build, Texinfo was scheduled for a 10 minute build, but took 12 minutes with the optional test suite.

Xz

Xz is an up-and-coming compression format that hopes to mostly replace less efficient formats like Gzip and Bzip2. It is great for source code, as Xz compresses text even better than either Gzip or Bzip2.

Xz installed to /tools (in the temporary toolchain, /mnt/lfs/tools).

15 minutes expected, 17 minutes taken (test suite included).

Now What?

All the packages for the temporary toolchain are now installed to /tools (in the temporary toolchain, /mnt/lfs/tools), and I am ready for the final steps in preparing the temporary toolchain for action!

LFS 6.8 (Part 9): The First Part of the Rest of the Temporary Toolchain

The LFS temporary toolchain is going to take a while to cover completely, so I’m breaking it up into several parts for readability.

After building the compiler and C libraries, I need to build numerous small, but not insignificant, utilities for the temporary toolchain.

Tcl

Tcl is the Tool Command Language, a flexible scripting language that is commonly used on *nix systems.

Tcl, Expect, and DejaGNU are used in the test suites for the versions of Binutils and GCC I will build for the final LFS system. According to LFS, “[i]nstalling three packages for testing purposes may seem excessive, but it is very reassuring, if not essential, to know that the most important tools are working properly.” After the test suites, though, these programs are unnecessary, so I won’t be building them again for the final LFS system. To build Tcl, I first cd’ed into the unix/ subdirectory of the Tcl source and ran configure and make there. I also ran the optional test suite, just because. Tcl installed to /tools (in the temporary toolchain, /mnt/lfs/tools).

Tcl was supposed to take about 25 minutes to build, but with the test suites it took a total of 38 minutes to build.

I made the Tcl libraries writable so I can strip debugging symbols at the end of the temporary toolchain build process (to recover extra space, and presumably speed things up a bit). Next, I installed Tcl’s private headers, since Expect needs them, and made a symlink to tclsh.

Expect

Expect is a popular extension to Tcl that makes it easy to script interactions with *nix programs (in fact, Expect is one of the main reasons people use Tcl in the first place).

Before compiling Expect, I ran a script to make it use the known-good /bin/stty; this prevents Expect from trying to use a (non-existent) /usr/local/bin/stty that can be present in the live CD build environment. I configured Expect to look in /tools/lib (in the temporary toolchain, /mnt/lfs/tools/lib) for the freshly compiled Tcl libraries, and to build without the included Expect scripts, since they are unneeded. I also ran the optional test suite, which experienced eleven failures, but the LFS book says not to worry about Expect test suite failures, since they apparently fail at random with no consequences (“under certain host conditions that are not within our control” to be exact). Expect is installed to /tools (in the temporary toolchain, /mnt/lfs/tools).

I expected Expect to build in 5 minutes, but instead Pressie expectorated Expect in 6 minutes.

DejaGNU

DejaGNU is a program testing framework written with Expect.

LFS uses DejaGNU 1.4.4, which was released in 2004. Naturally, GNU released DejaGNU 1.5 six days after the LFS 6.8 book was released :P .

However, I’m using version 1.4.4, so I applied a patch containing several fixes to the old version before compiling. DejaGNU is built and installed in a single step, so I had to run the optional test suite after the installation rather than before. DejaGNU installed to /tools (in the temporary toolchain, /mnt/lfs/tools).

DejaGNU was supposed to build in less than 5 minutes, and Pressie delivered a shining build time of 2 minutes.

Ncurses

Ncurses is a handy set of libraries that lets programs define a text interface without having to rewrite the interface for every different terminal program. Think of it as a text counterpart to GTK+ or Qt.

Ncurses was fairly easy to compile, with a few options enabled that ensure it will be usable once I chroot into the final LFS system. I did not run the test suites for Ncurses since they can only be run once Ncurses is installed, and the LFS book did not provide explicit instructions for not screwing things up. Ncurses is installed to /tools (in the temporary toolchain, /mnt/lfs/tools).

Estimated build time was 34 minutes, but Ncurses built in a brisk 24 minutes. Way to go, Pressie!

Bash

Bash is the Bourne-Again SHell, a Swiss Army tool for interacting with your computer in a super-efficient way on the command line. There are seriously way too many features and capabilities in Bash for me to do them justice: just know that there are a lot.

Bash has a built-in malloc (memory allocation) function, but it apparently breaks a lot, so I compiled Bash to use Glibc’s more stable malloc. Temporary toolchain Bash is installed to /tools (in the temporary toolchain, /mnt/lfs/tools).

Bash was supposed to build in 25 minutes, and it only took 30 minutes.

Finally, I created a sh symlink to bash. Similar to the cc symlink for gcc, this allows for scripts to be generic, and for system administrators to use a program of their choice.
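The idea, demonstrated in a scratch bin/ directory (the real command targets /tools/bin):

```shell
bin=$(mktemp -d)        # stand-in for /tools/bin
touch "$bin/bash"       # stand-in for the freshly built bash binary

ln -s bash "$bin/sh"    # scripts that invoke "sh" now get bash
readlink "$bin/sh"      # -> bash
```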

Bzip2

Bzip2 is a compression/decompression utility that is newer and better than Gzip, but older and not quite as good as Xz. It is one of the most commonly used compression methods for distributing source code (though Gzip is still pretty popular).

Bzip2 is installed to /tools (in the temporary toolchain, /mnt/lfs/tools).

I expected Bzip2 to compile in less than 5 minutes, and Pressie breezed through in only 3 minutes.

Coreutils

Coreutils is a giant wad of basic system utilities. I think the only reason they’re all together is that it was easier than having dozens of individual packages that would all have to be installed anyway.

I needed to tweak Coreutils to build the hostname program, which the Perl test suite will need later on. Coreutils installed to /tools (in the temporary toolchain, /mnt/lfs/tools).

Coreutils was supposed to build in 34 minutes, but took 1 hour 35 minutes. (I ran the optional test suite, which is probably why Pressie took so long. No errors yet!)

One of the tests failed in the (optional) Coreutils test suite, but I didn’t record which one. I hope this doesn’t come back to bite me….

Lastly, I manually installed this temporary toolchain version of su as su-tools according to the LFS book instructions. su is not truly installable yet, as I’m building and installing the temporary toolchain as the user lfs, and only root can install a su that can setuid root. The manually installed su-tools will be used in tests in the final LFS system, and I’ll be able to keep a fully functional su accessible in the LFS live CD build environment.

Diffutils

Diffutils is a collection of tools for comparing files and directories.

Diffutils installed to /tools (in the temporary toolchain, /mnt/lfs/tools).

Diffutils was supposed to build in less than 5 minutes, but took 18 minutes. (Again, running the optional test suite likely inflated the numbers.)

File

File is a very old, very solid *nix program for identifying file types. According to its authors, File is used in every known BSD and every known Linux distribution — which is pretty cool!

File is also installed to /tools (in the temporary toolchain, /mnt/lfs/tools).

Since File is a fairly small program, it was projected to compile in only 10 minutes, though it actually took 7 minutes, even with the test suite!

Findutils

Findutils has several GNU programs to find files, as well as to generate and maintain a file database (to find files faster).

Findutils is installed to /tools (in the temporary toolchain, /mnt/lfs/tools).

Findutils was supposed to compile in 15 minutes, though it actually took 16 minutes with the test suite.

Gawk

Gawk is the GNU implementation of the AWK language. AWK lets you script the manipulation of text files.

Gawk is installed to /tools (in the temporary toolchain, /mnt/lfs/tools).

Gawk was supposed to take 10 minutes to build, and actually took 10 minutes. Hooray! Even the test suite didn’t slow Pressie down!

More to come!

So far, no major problems have stopped my progress in building LFS. Let’s hope my luck holds!

LFS 6.8 (Part 8): Temporary Toolchain Second Pass and Sanity Check

The LFS temporary toolchain is going to take a while to cover completely, so I’m breaking it up into several parts for readability.

Binutils (Pass 2)

To review:

Binutils contains some of the closest “to the metal” utilities that Linux needs, providing low-level translation of programmer intent into machine-readable binary code. Binutils gets built first as GCC and Glibc will need these lower-level utilities in order to function properly themselves.

The first pass of Binutils was a cross-compile, and this second pass uses a few extra options to use only the first pass (cross-compiled) Binutils’ tools and libraries. This removes any traces of library or settings contamination from the build environment (I’m using the LFS live CD). The second pass of Binutils is installed to the /tools directory (in the temporary toolchain, /mnt/lfs/tools), overwriting the first pass version of Binutils.

I estimated Pass #2 of Binutils to take 1 hour and 4 minutes to build, and Pressie breezed through in only 58 minutes.

Not quite everything is right yet: the linker (ld) I just compiled would incorrectly look in /tools for libraries to link against. To remove the contamination of the temporary toolchain itself, I cleaned out the ld subdirectory and recompiled ld to point to /usr/lib and /lib (which is where libraries will be on my final LFS system). Then I copied the new ld to /tools/bin (in the temporary toolchain, /mnt/lfs/tools/bin).

GCC (Pass 2)

To review again:

GCC is the GNU Compiler Collection, and it is the main utility on a Linux box for compiling programs written in C and C++ (and a host of others that are irrelevant to LFS for now).

The first pass of GCC was a cross-compile, and the second pass of GCC will be an uncontaminated build. GCC required a patch and several tweaks to prepare it for building the final LFS system, mostly by changing various build scripts to use the /tools Glibc libraries rather than the live CD Glibc libraries. These tweaks took quite a bit of typing, so be sure to check several times for any typos before hitting that last Enter. Since Pressie is based on an i586 CPU, I skipped a step that is intended to prevent GCC from linking to x86_64 libraries on a 64-bit host environment. Next, I extracted GMP, MPFR, and MPC into GCC’s source directory for the build process. I needed both the C and C++ compilers to be built, so there are some extra C++-specific configure options that were unnecessary in the first pass. The second pass of GCC installed to /tools (/mnt/lfs/tools in the temporary toolchain), overwriting the first pass version.

I expected GCC to build in 7 hours and 21 minutes, but it took 8 hours and 12 minutes.

Finally, since many programs link to cc (a generic C compiler) instead of to gcc specifically, I created a cc symlink to gcc.

Sanity Check!

LFS recommends doing a basic sanity check on the temporary toolchain before going further. Essentially, I compiled an empty C program and made sure I was using the correct program linker (the new one in /tools, and not the one from the live CD environment), then deleted the test output. At this point, everything is A-OK to-the-letter perfect. Now I’m ready to build the rest of the utilities in the temporary toolchain!
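In essence, the check looks like this. This is a hedged sketch, not the book’s exact commands: it is guarded so it degrades gracefully on a host without a compiler, and the expected interpreter path shown in the comment is what the LFS chroot should report, not what an ordinary host will.

```shell
cd "$(mktemp -d)"
echo 'int main(){}' > dummy.c

if command -v cc >/dev/null; then
  cc dummy.c                          # compile the empty program
  # Ask which dynamic linker got baked into the binary:
  readelf -l a.out | grep 'Requesting program interpreter'
  # In the LFS chroot, the answer must live under /tools, e.g.:
  #   [Requesting program interpreter: /tools/lib/ld-linux.so.2]
  rm dummy.c a.out                    # delete the test output, as the book says
else
  echo 'no compiler on this host; skipping the check'
fi
```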

LFS 6.8 (Part 7): Temporary Toolchain First Pass and Adjustments

The LFS temporary toolchain is going to take a while to cover completely, so I’m breaking it up into several parts for readability.

Binutils (Pass 1)

Binutils contains some of the closest “to the metal” utilities that Linux needs, providing low-level translation of programmer intent into machine-readable binary code. Binutils gets built first as GCC and Glibc will need these lower-level utilities in order to function properly themselves.

This first pass of Binutils is a cross-compile, making it refer specifically to the hardware it is being built on, rather than to the hardware the current build environment (I’m using the LFS live CD) was built for. Binutils is installed to the /tools directory (in the temporary toolchain, /mnt/lfs/tools) because right now / (root) refers to the live CD environment.

I put the configure and make commands for Binutils into a time { } wrapper to measure an SBU (Standard Build Unit) so I could then reliably estimate build times for other packages. It turns out an SBU on Pressie is 49 minutes and 8 seconds.

GCC (Pass 1)

GCC is the GNU Compiler Collection, and it is the main utility on a Linux box for compiling programs written in C and C++ (and a host of others that are irrelevant to LFS for now).

The first pass of GCC is also a cross-compile. It requires GMP, MPFR, and MPC, so I extracted their sources into GCC’s source directory so they could be used during the build process. GCC is installed to /tools (/mnt/lfs/tools in the temporary toolchain). Only the C compiler is required for now, so that’s all I built.

I expected GCC to build in 4 hours and 5 minutes, but in reality it took 4 hours and 44 minutes.

As a final touch, I fake out the soon-to-be-built Glibc by symlinking libgcc.a to libgcc_eh.a, since apparently libgcc.a has all the things Glibc wants from libgcc_eh.a. Ours not to reason why….

Linux API Headers

Now this part took me a short while to figure out: to compile the Linux API headers I had to extract the entire Linux kernel source, but only compile the headers with some special commands.

The Linux API headers expose the Linux API to Glibc when I build Glibc; that way, Glibc knows what the kernel can and can’t do, and it compiles accordingly. The headers are installed to /tools/include (in the temporary toolchain, /mnt/lfs/tools/include).

It actually took longer to extract the sources than to build this one: Pressie shattered the predicted build time of 5 minutes and built in only 3 minutes.

Glibc (not EGlibc)

Glibc is the main C library. Basically, it is a standardized collection of basic functionality for the C language (like memory allocation, file reading, etc.).

I needed to apply a patch to Glibc that fixes a bug preventing it from compiling with the version of GCC I built earlier. I gave Glibc a dedicated build directory, as recommended, and then ran a command to compile it as compatible with i486 (i386 is no longer supported by Glibc). I’m not sure why Glibc’s authors didn’t make it i486 by default, but I’m sure it made sense at the time.

Glibc is also cross-compiled, using the first pass builds of Binutils and GCC to configure itself according to the capabilities of Pressie’s hardware. After adjusting the toolchain settings, I will compile Binutils and GCC against the new and Pressie-specific Glibc, so they will be free of any polluting influences of the live CD environment. Glibc is installed to /tools (/mnt/lfs/tools in the temporary toolchain) as well.

A predicted 5 hours and 38 minutes was really 6 hours and 7 minutes.

Adjusting the Toolchain

With the temporary C libraries in place, it is time to adjust things to point toward these new libraries instead of the libraries provided by the live CD environment. I ran a sed script that removed all references in GCC’s specs file to /lib (libraries on the live CD) and pointed them instead to /tools (temporary toolchain libraries on the hard drive).
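The real command pipes gcc -dumpspecs through sed and writes the result into GCC’s specs file; here is the substitution itself, shown on one sample specs line (a sketch, not the book’s exact regex):

```shell
# A typical dynamic-linker reference as it appears in the specs output:
specs_line='/lib/ld-linux.so.2'

# Repoint it from the live CD's /lib to the temporary toolchain's /tools/lib:
rewritten=$(echo "$specs_line" | sed 's@/lib/ld@/tools/lib/ld@')
echo "$rewritten"   # /tools/lib/ld-linux.so.2
```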

Finally, I ran a sanity check like the book recommended, and everything worked perfectly! Next I will compile the second passes of Binutils and GCC, together with twenty-odd smaller packages that I’ll need for the rest of the temporary toolchain.

LFS 6.8 (Part 6): Temporary Toolchain Overview

The LFS temporary toolchain is going to take a while to cover completely, so I’m breaking it up into several parts for readability.

The first part of actually building LFS is building a temporary toolchain to prevent contamination from the host environment (in my case, the LFS live CD).

I will build Binutils and GCC twice during the beginning of this process: first to tune them for the hardware I’m using; and second to allow them to refer only to themselves, and not to the live CD versions. This was the trickiest part of the build for me to understand, but it made sense the more I thought about it. Next, I will build a minimal set of tools that will let me build (and test) my final LFS system.

At several points in this process, I must adjust the temporary toolchain to be more self-referential, until finally I chroot into the final build environment and discard the temporary toolchain.

Before I actually begin building the temporary toolchain, I am going to explain my build process for the toolchain, and most of the rest of LFS, so later entries can assume it for brevity’s sake:

  • Since my sources are in /sources (/mnt/lfs/sources during the temporary toolchain build), I will be there to begin with each program build.
  • I will then use tar -zxf on .tar.gz sources and tar -jxf on .tar.bz2 sources to decompress them into their own directory (usually <package-name>-<version>).
  • I will then cd into the newly extracted directories and follow the instructions as given in the book.
  • I will, where possible, use a time { ./configure && make && make install } template so I can give an accurate account of the length of each build (this deviates slightly from the book, except for the first time I build Binutils for the temporary toolchain).
  • I will then cd back into /sources (again, /mnt/lfs/sources during the temporary toolchain build) and rm -rf the extracted source files and any dedicated build directories. This keeps down the amount of hard disk space needed, especially for a very constrained system like Pressie, and it will ensure a completely clean build for software that I have to build multiple times.
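The whole template, exercised end to end against a tiny fake package so the loop is visible (real packages replace the fake tarball, and make && make install follow configure):

```shell
cd "$(mktemp -d)"                  # stand-in for /sources

# Manufacture a fake package so the template can actually run:
mkdir fake-1.0
printf '#!/bin/sh\necho configured\n' > fake-1.0/configure
chmod +x fake-1.0/configure
tar -cjf fake-1.0.tar.bz2 fake-1.0
rm -rf fake-1.0

# The per-package routine:
tar -jxf fake-1.0.tar.bz2          # unpack into its own directory
cd fake-1.0
time { ./configure; }              # real builds: time { ./configure && make && make install; }
cd ..
rm -rf fake-1.0                    # clean slate for the next build
```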

And with these preliminaries, I am ready to build the temporary toolchain.

LFS 6.8 (Part 5): Version Checks and Final Preparations

Continuing my LFS adventures, it is time to check that the base system (in this case, the LFS 6.3-r2160 live CD) has the necessary tools for successfully compiling LFS.

There is a handy version-check bash script on page xviii of the LFS book that I ran. I typed it in by hand, but it was satisfying work because I learned some interesting text-processing tidbits in the process. It turns out that everything on the live CD is the exact version that LFS 6.8 requires, except grep: the live CD has version 2.5.1, while the book lists the minimum required version as 2.5.1a. Hopefully everything will work anyway.
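The book’s script is longer, but its core move is just this (a sketch; tools missing on the host are reported rather than aborting the run):

```shell
# Ask each required tool for its version and keep the first line of the reply:
versions=$(for tool in bash gcc grep make sed tar; do
  if command -v "$tool" >/dev/null; then
    "$tool" --version | head -n1
  else
    echo "$tool: NOT FOUND"
  fi
done)
echo "$versions"
```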

Next, I set up the $LFS variable in bash. The $LFS variable is important because it is used throughout the book (trust me, I’ve looked) while building and installing LFS. I haven’t deviated from the book yet, so I was able to type in all the commands verbatim.

Then I created the $LFS/tools directory, which I will use while I build the temporary toolchain. Then I created a /tools symlink on the live CD / (root directory) — this means the toolchain will install its files to /mnt/lfs/tools (aka $LFS/tools), even though, thanks to the symlink, the tools think they are installed in /tools. That way, when the first toolchain is completed and ready to build the second (clean) toolchain, and I have to chroot into $LFS, the toolchain can still look at /tools and find everything it needs.

Next, I followed the instructions for creating a new user — lfs — and setting up a known-good bash environment. I then used the source command to enter into my newly created bash environment: spartan, but effective.

Finally, the LFS book addresses SBUs (Standard Build Units) and test suites before beginning the first toolchain. SBUs are a rough measure of how long a particular package will take to compile. Skipping ahead some, my SBU is 49 minutes and 5 seconds, a far cry from the 3-odd minutes of the “fastest” systems the LFS book speaks of. The book also recommends skipping the test suites that are usually run on the first toolchain, since they offer near-zero benefit at this point.

I am now ready to build the first LFS toolchain. Onward, ho!