In this chapter, we will perform a few additional tasks to prepare for building the temporary system. We will create a set of directories in $LFS (in which we will install the temporary tools), add an unprivileged user, and create an appropriate build environment for that user. We will also explain the units of time (“SBUs”) we use to measure how long it takes to build LFS packages, and provide some information about package test suites.
4.2. Creating a Limited Directory Layout in the LFS Filesystem
In this section, we begin populating the LFS filesystem with the pieces that will constitute the final Linux system.
The first step is to create a limited directory hierarchy, so that the programs compiled in Chapter 6 (as well as glibc and libstdc++ in Chapter 5) can be installed in their final location. We do this so those temporary programs will be overwritten when the final versions are built in Chapter 8.
Create the required directory layout by issuing the following commands as root :
mkdir -pv $LFS/{etc,var} $LFS/usr/{bin,lib,sbin}
for i in bin lib sbin; do
ln -sv usr/$i $LFS/$i
done
case $(uname -m) in
x86_64) mkdir -pv $LFS/lib64 ;;
esac
Programs in Chapter 6 will be compiled with a cross-compiler (more details can be found in section Toolchain Technical Notes). This cross-compiler will be installed in a special directory, to separate it from the other programs. Still acting as root , create that directory with this command:
mkdir -pv $LFS/tools
Note
The LFS editors have deliberately decided not to use a /usr/lib64 directory. Several steps are taken to be sure the toolchain will not use it. If for any reason this directory appears (either because you made an error in following the instructions, or because you installed a binary package that created it after finishing LFS), it may break your system. You should always be sure this directory does not exist.
4.3. Adding the LFS User
When logged in as user root , making a single mistake can damage or destroy a system. Therefore, the packages in the next two chapters are built as an unprivileged user. You could use your own user name, but to make it easier to set up a clean working environment, we will create a new user called lfs as a member of a new group (also named lfs ) and run commands as lfs during the installation process. As root , issue the following commands to add the new user:
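A sketch of those commands, matching the option descriptions that follow (run as root):

```shell
groupadd lfs
useradd -s /bin/bash -g lfs -m -k /dev/null lfs
```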
This is what the command line options mean:
-s /bin/bash
This makes bash the default shell for user lfs .
-g lfs
This option adds user lfs to group lfs .
-m
This creates a home directory for lfs .
-k /dev/null
This parameter prevents possible copying of files from a skeleton directory (the default is /etc/skel ) by changing the input location to the special null device.
lfs
This is the name of the new user.
If you want to log in as lfs or switch to lfs from a non-root user (as opposed to switching to user lfs when logged in as root , which does not require the lfs user to have a password), you need to set a password for lfs . Issue the following command as the root user to set the password:
passwd lfs
Grant lfs full access to all the directories under $LFS by making lfs the owner:
chown -v lfs $LFS/{usr{,/*},var,etc,tools}
case $(uname -m) in
x86_64) chown -v lfs $LFS/lib64 ;;
esac
Note
On some host systems, the following su command does not complete properly and suspends the login for the lfs user to the background. If the prompt "lfs:~$" does not appear immediately, entering the fg command will fix the issue.
Next, start a shell running as user lfs . This can be done by logging in as lfs on a virtual console, or with the following substitute/switch user command:
su - lfs
The “ - ” instructs su to start a login shell as opposed to a non-login shell. The difference between these two types of shells is described in detail in bash(1) and info bash.
4.4. Setting Up the Environment
Set up a good working environment by creating two new startup files for the bash shell. While logged in as user lfs , issue the following command to create a new .bash_profile :
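A sketch of that command, consistent with the exec env -i description below (HOME, TERM, and PS1 are the variables the text says are preserved):

```shell
cat > ~/.bash_profile << "EOF"
exec env -i HOME=$HOME TERM=$TERM PS1='\u:\w\$ ' /bin/bash
EOF
```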
When logged on as user lfs , or when switched to the lfs user using an su command with the “ - ” option, the initial shell is a login shell which reads the /etc/profile of the host (probably containing some settings and environment variables) and then .bash_profile .
The exec env -i ... /bin/bash command in the .bash_profile file replaces the running shell with a new one with a completely empty environment, except for the HOME , TERM , and PS1 variables. This ensures that no unwanted and potentially hazardous environment variables from the host system leak into the build environment.
The new instance of the shell is a non-login shell, which does not read, and execute, the contents of the /etc/profile or .bash_profile files, but rather reads, and executes, the .bashrc file instead. Create the .bashrc file now:
cat > ~/.bashrc << "EOF"
set +h
umask 022
LFS=/mnt/lfs
LC_ALL=POSIX
LFS_TGT=$(uname -m)-lfs-linux-gnu
PATH=/usr/bin
if [ ! -L /bin ]; then PATH=/bin:$PATH; fi
PATH=$LFS/tools/bin:$PATH
CONFIG_SITE=$LFS/usr/share/config.site
export LFS LC_ALL LFS_TGT PATH CONFIG_SITE
EOF
The meaning of the settings in .bashrc:
set +h
The set +h command turns off bash 's hash function.
Hashing is ordinarily a useful feature— bash uses a hash table to remember the full path to executable files to avoid searching the PATH time and again to find the same executable. However, the new tools should be used as soon as they are installed. Switching off the hash function forces the shell to search the PATH whenever a program is to be run. As such, the shell will find the newly compiled tools in $LFS/tools/bin as soon as they are available without remembering a previous version of the same program provided by the host distro, in /usr/bin or /bin .
umask 022
Setting the umask as we've already explained in Section 2.6, “Setting the $LFS Variable and the Umask.”
LFS=/mnt/lfs
The LFS variable should be set to the chosen mount point.
LC_ALL=POSIX
The LC_ALL variable controls the localization of certain programs, making their messages follow the conventions of a specified country. Setting LC_ALL to “POSIX” or “C” (the two are equivalent) ensures that everything will work as expected in the cross-compilation environment.
LFS_TGT=$(uname -m)-lfs-linux-gnu
The LFS_TGT variable sets a non-default, but compatible machine description for use when building our cross- compiler and linker and when cross-compiling our temporary toolchain. More information is provided by Toolchain Technical Notes.
PATH=/usr/bin
Many modern Linux distributions have merged /bin and /usr/bin . When this is the case, the standard PATH variable should be set to /usr/bin for the Chapter 6 environment. When this is not the case, the following line adds /bin to the path.
if [ ! -L /bin ]; then PATH=/bin:$PATH; fi
If /bin is not a symbolic link, it must be added to the PATH variable.
PATH=$LFS/tools/bin:$PATH
By putting $LFS/tools/bin ahead of the standard PATH , the cross-compiler installed at the beginning of Chapter 5 is picked up by the shell immediately after its installation. This, combined with turning off hashing, limits the risk that the compiler from the host is used instead of the cross-compiler.
CONFIG_SITE=$LFS/usr/share/config.site
In Chapter 5 and Chapter 6, if this variable is not set, configure scripts may attempt to load configuration items specific to some distributions from /usr/share/config.site on the host system. Override it to prevent potential contamination from the host.
export ...
While the preceding commands have set some variables, in order to make them visible within any sub-shells, we export them.
Important
Several commercial distributions add an undocumented instantiation of /etc/bash.bashrc to the initialization of bash . This file has the potential to modify the lfs user's environment in ways that can affect the building of critical LFS packages. To make sure the lfs user's environment is clean, check for the presence of /etc/bash.bashrc and, if present, move it out of the way. As the root user, run:
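A sketch of such a command; the .NOUSE suffix is just a convention for parking the file under a name bash will not read:

```shell
[ ! -e /etc/bash.bashrc ] || mv -v /etc/bash.bashrc /etc/bash.bashrc.NOUSE
```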
When the lfs user is no longer needed (at the beginning of Chapter 7), you may safely restore /etc/bash.bashrc (if desired).
Note that the LFS Bash package we will build in Section 8.36, “Bash-5.3” is not configured to load or execute /etc/bash.bashrc , so this file is useless on a completed LFS system.
For many modern systems with multiple processors (or cores), the compilation time for a package can be reduced by performing a "parallel make": telling the make program how many processors are available via a command line option or an environment variable. For instance, an Intel Core i9-13900K processor has 8 P (performance) cores and 16 E (efficiency) cores. A P core can simultaneously run two threads, so each P core is modeled as two logical cores by the Linux kernel. As a result there are 32 logical cores in total. One obvious way to use all these logical cores is to allow make to spawn up to 32 build jobs. This can be done by passing the -j32 option to make :
make -j32
Or set the MAKEFLAGS environment variable and its content will be automatically used by make as command line options:
export MAKEFLAGS=-j32
Important
Never pass a -j option without a number to make , and never set such an option in MAKEFLAGS . Doing so allows make to spawn an unlimited number of build jobs, which will cause system stability problems.
To use all logical cores available for building packages in Chapter 5 and Chapter 6, set MAKEFLAGS now in .bashrc :
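A sketch of the addition, appending to the .bashrc created earlier ( nproc reports the number of available logical cores):

```shell
cat >> ~/.bashrc << "EOF"
export MAKEFLAGS=-j$(nproc)
EOF
```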
Replace $(nproc) with the number of logical cores you want to use if you don't want to use all the logical cores.
Finally, to ensure the environment is fully prepared for building the temporary tools, force the bash shell to read the new user profile:
source ~/.bash_profile
4.5. About SBUs
Many people would like to know beforehand approximately how long it takes to compile and install each package.
Because Linux From Scratch can be built on many different systems, it is impossible to provide absolute time estimates.
The biggest package (gcc) will take approximately 5 minutes on the fastest systems, but could take days on slower systems! Instead of providing actual times, we use the Standard Build Unit (SBU) measure.
The SBU measure works as follows. The first package to be compiled is binutils in Chapter 5. The time it takes to compile using one core is what we will refer to as the Standard Build Unit or SBU. All other compile times will be expressed in terms of this unit of time.
For example, consider a package whose compilation time is 4.5 SBUs. This means that if your system took 4 minutes to compile and install the first pass of binutils, it will take approximately 18 minutes to build the example package.
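The arithmetic can be checked with shell integer math (SBU_MINUTES is your own measured binutils time; scaling the SBU rating by 10 avoids fractions):

```shell
SBU_MINUTES=4        # measured time for binutils pass 1 on your machine
TENTHS_OF_SBUS=45    # a package rated at 4.5 SBUs, scaled by 10
echo "estimated minutes: $(( SBU_MINUTES * TENTHS_OF_SBUS / 10 ))"
# prints: estimated minutes: 18
```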
Fortunately, most build times are shorter than one SBU.
SBUs are not entirely accurate because they depend on many factors, including the host system's version of GCC. They are provided here to give an estimate of how long it might take to install a package, but the numbers can vary by as much as dozens of minutes in some cases.
On some newer systems, the firmware is capable of adjusting the system clock speed, and this behavior can be controlled with a command such as powerprofilesctl . This command is not available in LFS, but may be available on the host distro.
After LFS is complete, it can be added to a system with the procedures at the BLFS power-profiles-daemon page.
Before measuring the build time of any package it is advisable to use a system power profile set for maximum performance (and maximum power consumption). Otherwise the measured SBU value may be inaccurate because the system may react differently when building binutils-pass1 or other packages. Be aware that a significant inaccuracy can still show up even if the same profile is used for both packages because the system may respond slower if the system is idle when starting the build procedure. Setting the power profile to “performance” will minimize this problem. And obviously doing so will also make the system build LFS faster.
If powerprofilesctl is available, issue the powerprofilesctl set performance command to select the performance profile. Some distros provide the tuned-adm command for managing the profiles instead of powerprofilesctl ; on these distros, issue the tuned-adm profile throughput-performance command to select the throughput-performance profile.
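The two cases can be combined in one sketch that picks whichever tool the host provides (a no-op if neither exists):

```shell
if command -v powerprofilesctl > /dev/null; then
    powerprofilesctl set performance
elif command -v tuned-adm > /dev/null; then
    tuned-adm profile throughput-performance
fi
```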
Note
When multiple processors are used in this way, the SBU units in the book will vary even more than they normally would. In some cases, the make step will simply fail. Analyzing the output of the build process will also be more difficult because the lines from different processes will be interleaved. If you run into a problem with a build step, revert to a single processor build to properly analyze the error messages.
The times presented here for all packages (except binutils-pass1 which is based on one core) are based upon using four cores (-j4). The times in Chapter 8 also include the time to run the regression tests for the package unless specified otherwise.
4.6. About the Test Suites
Most packages provide a test suite. Running the test suite for a newly built package is a good idea because it can provide a “sanity check” indicating that everything compiled correctly. A test suite that passes its set of checks usually proves that the package is functioning as the developer intended. It does not, however, guarantee that the package is totally bug free.
Some test suites are more important than others. For example, the test suites for the core toolchain packages—GCC, binutils, and glibc—are of the utmost importance due to their central role in a properly functioning system. The test suites for GCC and glibc can take a very long time to complete, especially on slower hardware, but are strongly recommended.
Note
Running the test suites in Chapter 5 and Chapter 6 is pointless; since the test programs are compiled with a cross-compiler, they probably can't run on the build host.
A common issue with running the test suites for binutils and GCC is running out of pseudo terminals (PTYs). This can result in a large number of failing tests. This may happen for several reasons, but the most likely cause is that the host system does not have the devpts file system set up correctly. This issue is discussed in greater detail at linuxfromscratch.org/lfs/faq.html#no-ptys. Sometimes package test suites will fail for reasons which the developers are aware of and have deemed non-critical.
Part III. Building the LFS Cross Toolchain and Temporary Tools
Important Preliminary Material
This part is divided into three stages: first, building a cross compiler and its associated libraries; second, using this cross toolchain to build several utilities in a way that isolates them from the host distribution; and third, entering the chroot environment (which further improves host isolation) and constructing the remaining tools needed to build the final system.
Important
This is where the real work of building a new system begins. Be very careful to follow the instructions exactly as the book shows them. You should try to understand what each command does, and no matter how eager you are to finish your build, you should refrain from blindly typing the commands as shown.
Read the documentation when there is something you do not understand. Also, keep track of your typing and of the output of commands, by using the tee utility to send the terminal output to a file. This makes debugging easier if something goes wrong.
The next section is a technical introduction to the build process, while the following one presents very important general instructions.
Toolchain Technical Notes
This section explains some of the rationale and technical details behind the overall build method. Don't try to immediately understand everything in this section. Most of this information will be clearer after performing an actual build. Come back and re-read this chapter at any time during the build process.
The overall goal of Chapter 5 and Chapter 6 is to produce a temporary area containing a set of tools that are known to be good, and that are isolated from the host system. By using the chroot command, the compilations in the remaining chapters will be isolated within that environment, ensuring a clean, trouble-free build of the target LFS system. The build process has been designed to minimize the risks for new readers, and to provide the most educational value at the same time.
This build process is based on cross-compilation. Cross-compilation is normally used to build a compiler and its associated toolchain for a machine different from the one that is used for the build. This is not strictly necessary for LFS, since the machine where the new system will run is the same as the one used for the build. But cross-compilation has one great advantage: anything that is cross-compiled cannot depend on the host environment.
About Cross-Compilation
Note
The LFS book is not (and does not contain) a general tutorial to build a cross- (or native) toolchain. Don't use the commands in the book for a cross-toolchain for some purpose other than building LFS, unless you really understand what you are doing.
It's known that installing GCC pass 2 will break the cross-toolchain. We don't consider this a bug, because GCC pass 2 is the last package to be cross-compiled in the book, and we won't “fix” it until we really need to cross-compile some package after GCC pass 2 in the future.
Cross-compilation involves some concepts that deserve a section of their own. Although this section may be omitted on a first reading, coming back to it later will help you gain a fuller understanding of the process.
Let us first define some terms used in this context.
The build is the machine where we build programs. Note that this machine is also referred to as the “host.” The host is the machine/system where the built programs will run. Note that this use of “host” is not the same as in other sections.
The target is only used for compilers. It is the machine the compiler produces code for. It may be different from both the build and the host.
As an example, let us imagine the following scenario (sometimes referred to as “Canadian Cross”). We have a compiler on a slow machine only, let's call it machine A, and the compiler ccA. We also have a fast machine (B), but no compiler for (B), and we want to produce code for a third, slow machine (C). We will build a compiler for machine C in three stages.
Then, all the programs needed by machine C can be compiled using cc2 on the fast machine B. Note that unless B can run programs produced for C, there is no way to test the newly built programs until machine C itself is running. For example, to run a test suite on ccC, we may want to add a fourth stage:
Stage  Build  Host  Target  Action
1      A      A     B       Build cross-compiler cc1 using ccA on machine A.
2      A      B     C       Build cross-compiler cc2 using cc1 on machine A.
3      B      C     C       Build compiler ccC using cc2 on machine B.
4      C      C     C       Rebuild and test ccC using ccC on machine C.
In the example above, only cc1 and cc2 are cross-compilers, that is, they produce code for a machine different from the one they are run on. The other compilers ccA and ccC produce code for the machine they are run on. Such compilers are called native compilers.
Implementation of Cross-Compilation for LFS
Note
All the cross-compiled packages in this book use an autoconf-based building system. The autoconf-based building system accepts system types in the form cpu-vendor-kernel-os, referred to as the system triplet. Since the vendor field is often irrelevant, autoconf lets you omit it.
An astute reader may wonder why a “triplet” refers to a four component name. The kernel field and the os field began as a single “system” field. Such a three-field form is still valid today for some systems, for example, x86_64-unknown-freebsd . But two systems can share the same kernel and still be too different to use the same triplet to describe them. For example, Android running on a mobile phone is completely different from Ubuntu running on an ARM64 server, even though they are both running on the same type of CPU (ARM64) and using the same kernel (Linux).
Without an emulation layer, you cannot run an executable for a server on a mobile phone or vice versa. So the “system” field has been divided into kernel and os fields, to designate these systems unambiguously. In our example, the Android system is designated aarch64-unknown-linux-android , and the Ubuntu system is designated aarch64-unknown-linux-gnu .
The word “triplet” remains embedded in the lexicon. A simple way to determine your system triplet is to run the config.guess script that comes with the source for many packages. Unpack the binutils sources, run the script ./config.guess , and note the output. For example, for a 32-bit Intel processor the output will be i686-pc-linux-gnu. On a 64-bit system it will be x86_64-pc-linux-gnu. On most Linux systems the even simpler gcc -dumpmachine command will give you similar information.
You should also be aware of the name of the platform's dynamic linker, often referred to as the dynamic loader (not to be confused with the standard linker ld that is part of binutils). The dynamic linker provided by package glibc finds and loads the shared libraries needed by a program, prepares the program to run, and then runs it. The name of the dynamic linker for a 32-bit Intel machine is ld-linux.so.2 ; it's ld-linux-x86-64.so.2 on 64-bit systems. A sure-fire way to determine the name of the dynamic linker is to inspect a random binary from the host system by running: readelf -l <name of binary> | grep interpreter and noting the output. The authoritative reference covering all platforms is in a Glibc wiki page.
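For example, on a typical 64-bit x86 host (the binary chosen and the exact loader path will vary):

```shell
readelf -l /bin/sh | grep interpreter
```

This typically prints a line such as [Requesting program interpreter: /lib64/ld-linux-x86-64.so.2].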
There are two key points for a cross-compilation:
When producing and processing the machine code supposed to be executed on “the host,” the cross-toolchain must be used. Note that the native toolchain from “the build” may still be invoked to generate machine code supposed to be executed on “the build.” For example, the build system may compile a generator with the native toolchain, then generate a C source file with the generator, and finally compile the C source file with the cross-toolchain so the generated code will be able to run on “the host.”
With an autoconf-based build system, this requirement is ensured by using the --host switch to specify “the host” triplet. With this switch the build system will use the toolchain components prefixed with <the host triplet> for generating and processing the machine code for “the host”; e.g. the compiler will be <the host triplet>-gcc and the readelf tool will be <the host triplet>-readelf .
The build system should not attempt to run any generated machine code supposed to be executed on “the host.”
For example, when building a utility natively, its man page can be generated by running the utility with the --help switch and processing the output, but generally it's not possible to do so for a cross-compilation as the utility may fail to run on “the build”: it's obviously impossible to run ARM64 machine code on a x86 CPU (without an emulator).
With an autoconf-based build system, this requirement is satisfied by “the cross-compilation mode,” in which optional features that require running machine code for “the host” at build time are disabled. When “the host” triplet is explicitly specified, “the cross-compilation mode” is enabled if and only if either the configure script fails to run a dummy program compiled into “the host” machine code, or “the build” triplet is explicitly specified via the --build switch and differs from “the host” triplet.
In order to cross-compile a package for the LFS temporary system, the name of the system triplet is slightly adjusted by changing the "vendor" field in the LFS_TGT variable so it says "lfs". LFS_TGT is then specified as “the host” triplet via --host , so the cross-toolchain will be used for generating and processing the machine code running as part of the LFS temporary system. We also need to enable “the cross-compilation mode”: even though “the host” machine code (i.e. the machine code for the LFS temporary system) can execute on the current CPU, it may refer to a library not available on “the build” (the host distro), or to code or data that does not exist, or is defined differently, in that library even if it happens to be available.
When cross-compiling a package for the LFS temporary system, we cannot rely on the configure script to detect this issue with the dummy program: the dummy program only uses a few components of libc that the host distro's libc likely provides (unless the host distro uses a different libc implementation, such as Musl), so it won't fail the way really useful programs likely would. Thus we must explicitly specify “the build” triplet to enable “the cross-compilation mode.” The value we use is just the default, i.e. the original system triplet from the config.guess output, but “the cross-compilation mode” depends on an explicit specification, as we've discussed.
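Putting the two switches together, a typical invocation looks roughly like this (a sketch, not a literal book command; the --prefix value and the config.guess path vary per package):

```shell
./configure --prefix=/usr                      \
            --host=$LFS_TGT                    \
            --build=$(../scripts/config.guess)
```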
We use the --with-sysroot option when building the cross-linker and cross-compiler, to tell them where to find the needed files for “the host.” This nearly ensures that none of the other programs built in Chapter 6 can link to libraries on “the build.” The word “nearly” is used because libtool , a “compatibility” wrapper of the compiler and the linker for autoconf-based build systems, can try to be too clever and mistakenly pass options allowing the linker to find libraries of “the build.” To prevent this fallout, we need to delete the libtool archive ( .la ) files and fix up an outdated libtool copy shipped with the Binutils code.
Stage  Build  Host  Target  Action
1      pc     pc    lfs     Build cross-compiler cc1 using cc-pc on pc.
2      pc     lfs   lfs     Build compiler cc-lfs using cc1 on pc.
3      lfs    lfs   lfs     Rebuild (and maybe test) cc-lfs using cc-lfs on lfs.
In the preceding table, “on pc” means the commands are run on a machine using the already installed distribution. “On lfs” means the commands are run in a chrooted environment.
This is not yet the end of the story. The C language is not merely a compiler; it also defines a standard library. In this book, the GNU C library, named glibc, is used (there is an alternative, "musl"). This library must be compiled for the LFS machine; that is, using the cross-compiler cc1. But the compiler itself uses an internal library providing complex subroutines for functions not available in the assembler instruction set. This internal library is named libgcc, and it must be linked to the glibc library to be fully functional. Furthermore, the standard library for C++ (libstdc++) must also be linked with glibc. The solution to this chicken and egg problem is first to build a degraded cc1-based libgcc, lacking some functionalities such as threads and exception handling, and then to build glibc using this degraded compiler (glibc itself is not degraded), and also to build libstdc++. This last library will lack some of the functionality of libgcc.
The upshot of the preceding paragraph is that cc1 is unable to build a fully functional libstdc++ with the degraded libgcc, but cc1 is the only compiler available for building the C/C++ libraries during stage 2. As we've discussed, we cannot run cc-lfs on pc (the host distro) because it may require some library, code, or data not available on “the build” (the host distro). So when we build gcc stage 2, we override the library search path to link libstdc++ against the newly rebuilt libgcc instead of the old, degraded build. This makes the rebuilt libstdc++ fully functional.
In Chapter 8 (or “stage 3”), all the packages needed for the LFS system are built. Even if a package has already been installed into the LFS system in a previous chapter, we still rebuild the package. The main reason for rebuilding these packages is to make them stable: if we reinstall an LFS package on a completed LFS system, the reinstalled content of the package should be the same as the content of the same package when first installed in Chapter 8. The temporary packages installed in Chapter 6 or Chapter 7 cannot satisfy this requirement, because some optional features of them are disabled because of either the missing dependencies or the “cross-compilation mode.”
Additionally, a minor reason for rebuilding the packages is to run the test suites.
Other Procedural Details
The cross-compiler will be installed in a separate $LFS/tools directory, since it will not be part of the final system.
Binutils is installed first because the configure runs of both gcc and glibc perform various feature tests on the assembler and linker to determine which software features to enable or disable. This is more important than one might realize at first. An incorrectly configured gcc or glibc can result in a subtly broken toolchain, where the impact of such breakage might not show up until near the end of the build of an entire distribution. A test suite failure will usually highlight this error before too much additional work is performed.
Binutils installs its assembler and linker in two locations, $LFS/tools/bin and $LFS/tools/$LFS_TGT/bin . The tools in one location are hard linked to the other. An important facet of the linker is its library search order. Detailed information can be obtained from ld by passing it the --verbose flag. For example, $LFS_TGT-ld --verbose | grep SEARCH will illustrate the current search paths and their order. (Note that this example can be run as shown only while logged in as user lfs . If you come back to this page later, replace $LFS_TGT-ld with ld ).
The next package installed is gcc. An example of what can be seen during its run of configure is:
checking what assembler to use... /mnt/lfs/tools/i686-lfs-linux-gnu/bin/as
checking what linker to use... /mnt/lfs/tools/i686-lfs-linux-gnu/bin/ld
This is important for the reasons mentioned above. It also demonstrates that gcc's configure script does not search the PATH directories to find which tools to use. However, during the actual operation of gcc itself, the same search paths are not necessarily used. To find out which standard linker gcc will use, run: $LFS_TGT-gcc -print-prog-name=ld .
(Again, remove the $LFS_TGT- prefix if coming back to this later.)
Detailed information can be obtained from gcc by passing it the -v command line option while compiling a program. For example, $LFS_TGT-gcc -v example.c (or without $LFS_TGT- if coming back later) will show detailed information about the preprocessor, compilation, and assembly stages, including gcc 's search paths for included headers and their order.
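As a sketch (example.c is a hypothetical test file; drop the $LFS_TGT- prefix on a finished system):

```shell
echo 'int main(void){ return 0; }' > example.c
$LFS_TGT-gcc -v example.c 2> gcc-verbose.log   # -v reports to stderr
grep -A4 'search starts here' gcc-verbose.log  # header search directories, in order
```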
Next up: sanitized Linux API headers. These allow the standard C library (glibc) to interface with features that the Linux kernel will provide.
Next comes glibc. This is the first package that we cross-compile. We use the --host=$LFS_TGT option to make the build system use the tools prefixed with $LFS_TGT- , and the --build=$(../scripts/config.guess) option to enable “the cross-compilation mode” as we've discussed. The DESTDIR variable is used to force installation into the LFS file system.
As mentioned above, the standard C++ library is compiled next, followed in Chapter 6 by other programs that must be cross-compiled to break circular dependencies at build time. The steps for those packages are similar to the steps for glibc. At the end of Chapter 6 the native LFS compiler is installed. First binutils-pass2 is built, in the same DESTDIR directory as the other programs, then the second pass of gcc is constructed, omitting some non-critical libraries.
Upon entering the chroot environment in Chapter 7, the temporary installations of programs needed for the proper operation of the toolchain are performed. From this point onwards, the core toolchain is self-contained and self-hosted.
In Chapter 8, final versions of all the packages needed for a fully functional system are built, tested, and installed.
General Compilation Instructions
Caution
During an LFS development cycle, the instructions in the book are often modified to adapt to a package update or to take advantage of new features in updated packages. Mixing the instructions of different versions of the LFS book can cause subtle breakages. This kind of issue generally results from reusing a script created for a prior LFS release. Such reuse is strongly discouraged.
If you are reusing scripts from a prior LFS release for any reason, you'll need to be very careful to update them to match the current version of the LFS book.
Here are some things you should know about building each package:
Several packages are patched before compilation, but only when the patch is needed to circumvent a problem.
A patch is often needed in both the current and the following chapters, but sometimes, when the same package is built more than once, the patch is not needed right away. Therefore, do not be concerned if instructions for a downloaded patch seem to be missing. Warning messages about offset or fuzz may also be encountered when applying a patch. Do not worry about these warnings; the patch was still successfully applied.
During the compilation of most packages, some warnings will scroll by on the screen. These are normal and can safely be ignored. These warnings are usually about deprecated, but not invalid, use of the C or C++ syntax. C standards change fairly often, and some packages have not yet been updated. This is not a serious problem, but it does cause the warnings to appear.
Check one last time that the LFS environment variable is set up properly:
echo $LFS
Make sure the output shows the path to the LFS partition's mount point, which is /mnt/lfs , using our example.
Finally, two important items must be emphasized:
Important
The build instructions assume that the Host System Requirements, including symbolic links, have been set properly:
bash is the shell in use.
sh is a symbolic link to bash.
/usr/bin/awk is a symbolic link to gawk.
/usr/bin/yacc is a symbolic link to bison, or to a small script that executes bison.
Important
Here is a synopsis of the build process.
Place all the sources and patches in a directory that will be accessible from the chroot environment, such as /mnt/lfs/sources/.
Change to the /mnt/lfs/sources/ directory.
For each package:
Using the tar program, extract the package to be built. In Chapter 5 and Chapter 6, ensure you are the lfs user when extracting the package.
Do not use any method except the tar command to extract the source code. Notably, using the cp -R command to copy the source code tree somewhere else can destroy timestamps in the source tree, and cause the build to fail.
Change to the directory created when the package was extracted.
Follow the instructions for building the package.
Change back to the sources directory when the build is complete.
Delete the extracted source directory unless instructed otherwise.