What are paging and segmentation?

Memory management

i. Direct addressing
ii. Virtual memory management
iii. Page or segment change algorithms

Main memory is a resource without which no program can be executed. Memory management is therefore a central task of every operating system. It organizes the allocation of main memory to user jobs or tasks, both program memory and data memory, and must keep records of which memory areas are currently free and which are occupied.

From the beginning of computer history up to the present day, the demand for main memory has always exceeded the available physical memory. The available memory is almost always too small to hold a large program, or several programs, at the same time. A number of solutions to this problem have been developed over the course of the various generations of computers.

The more modern methods exploit the fact that only a few storage locations are actually accessed at any given point in time, while the others are merely available for later accesses (which may never take place). The basic idea is to expand the storage capacity with the help of external mass storage (hard disks), from which the required information is loaded into main memory on demand. Memory management must then decide, in each situation, which program sections are to be loaded and which are to remain in the "background memory" (i.e. on the hard disk). Multitasking in particular creates many tasks for memory management, because several independent programs must reside in main memory and may have to be swapped out and in frequently.

It becomes even more complicated when several CPUs share a common memory.

The tasks of memory management also include providing mechanisms that protect against unauthorized memory accesses by other programs. The following list of memory management principles begins with the simplest, mostly older ones and ends with the methods common in today's general-purpose computers.

Direct addressing

Simple universal operating systems for single-program operation manage the entire memory as one block, consisting of a system area for the operating system and a user area. The current program occupies a contiguous part of the user area; the remaining memory is free. This scheme is called "single contiguous allocation".

  • Dedicated systems. Since the beginning of the user area is fixed in such a system, programs can be written, compiled and linked so that they run precisely at these memory addresses. This keeps the required tools very simple. This was the usual procedure at the beginning of microprocessor applications and is still sufficient today for simple problems.
  • Universal systems. The most important components of such simple operating systems are a so-called "job monitor", which manages a job queue, and a "loader", which loads the user program to be started, together with (parts of) a program library, from a background memory (usually a hard disk). Since all addresses of the loaded programs (or program parts) are already present in the object code as absolute addresses, such a loader is called an "absolute loader".

The following applies:

  • In single-program operation there is only one process in memory, which may use the entire memory. This method is no longer common today; microprocessor controllers are the exception. Even in single-program mode, the operating system and the user process share the memory.
  • In multi-program operation, several processes are loaded into memory and assigned to the CPU by the scheduler.

Problem with multi-program operation:

A process must never influence the memory area of other processes; hardware and software measures (memory protection) are necessary for this purpose. Multi-program operation also increases CPU utilization: if a process is waiting for the end of an I/O operation, the CPU runs idle in single-program mode, whereas in multi-program operation other processes can compute during this time.

The simplest approach is to divide the user area into fixed parts (partitions), which do not necessarily have to be the same size. In addition to partitioning, measures for relocating (moving) the programs during loading and for protecting memory are necessary (see below).

Processes may only access areas within their partition (this applies to code and data!), as otherwise other processes could be influenced in an impermissible way. The memory management unit must therefore provide appropriate protection functions and report accesses to "forbidden" areas to the operating system kernel. One solution is to add two registers to the processor, the base and limit registers: the base register contains the start address of the partition assigned to the active process, and the limit register its length. All addresses in the program are relative to the base register. The scheduler changes the two registers on every process switch.
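The base/limit check described above can be sketched as follows. This is an illustrative model, not taken from the text; all names and values are hypothetical.

```python
# Sketch of base/limit memory protection; all names are hypothetical.

class MemoryProtectionFault(Exception):
    """Raised when an access leaves the partition (reported to the kernel)."""

def translate(logical_addr, base, limit):
    """Map a base-register-relative address and enforce the limit register."""
    if not 0 <= logical_addr < limit:
        raise MemoryProtectionFault(f"address {logical_addr} outside partition")
    return base + logical_addr      # all program addresses are base-relative

# On a context switch the scheduler would simply load new base/limit values.
print(translate(100, base=40_000, limit=12_000))   # 40100
```

Because every address goes through the same addition and comparison, the hardware cost is just one adder and one comparator per access.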


With single contiguous allocation, an addressing problem arises for any library routines to be loaded if the user programs are of different lengths, because the library routines then end up at different addresses. A possible solution is a dedicated, fixed area for these routines, which, however, fragments the memory and leaves an unnecessarily large amount of it unused.

The more flexible solution is to prepare the library routines in such a way that the assignment of the memory locations they require can be postponed until load time. This requires a relocating loader, which can then also be used to load the user programs.

The preparation of the programs consists in marking the address parts of the machine instructions as absolute or relative addresses. The task of the relocating loader is then simply to add a constant offset (the loading start address of the program) to each relative address and to store the resulting absolute address. The prerequisite, of course, is that the programs are written as if they always resided in memory starting at address zero.

At load time, each relative address is converted into an absolute one, so the loading process takes more time. Once in main memory, the loaded program can no longer be moved.
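The relocation step described above can be sketched as follows. This is a hypothetical model: the object code is represented as a list of words, and the relocation marks stand in for the absolute/relative flags in real object formats.

```python
# Hypothetical sketch of a relocating loader: the object code carries marks
# telling which words are relative addresses; at load time the constant
# offset (the load start address) is added to exactly those words.

def relocate(object_code, relocation_marks, load_address):
    image = list(object_code)          # program written as if loaded at 0
    for i in relocation_marks:
        image[i] += load_address       # relative -> absolute, once, at load time
    return image

code = [7, 0, 12, 4]                   # word 2 holds the relative address 12
print(relocate(code, [2], 5000))       # [7, 0, 5012, 4]
```

After this pass the image contains only absolute addresses, which is why the program cannot be moved afterwards without relocating it again.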

Flexibility increases further if the conversion into effective addresses is carried out not at load time but immediately when each instruction is executed. This requires a processor that supports the corresponding addressing modes: an address arithmetic unit must be present that adds either the content of a base address register set by the loader ("base-register-relative addressing") or the content of the program counter ("PC-relative addressing", which yields position-independent or relocatable code). The address fields in the instructions are no longer modified by the loader, so the loading process is much faster, and the program can still be moved in memory after loading.

Virtual memory management

One speaks of virtual memory if the address space to which the instructions of the processor refer is separated from the real address space of the main memory in which the program resides during execution.

The address space to which the program commands refer is the logical address space. The address space of the real main memory is the physical address space. Programs can be written independently of the physical address space. The logical address space describes an imaginary, non-real working memory, which is referred to as virtual memory.

Usually the logical address space is larger than the physical address space (often much larger). The virtual memory is mapped onto disk. A program must be loaded into main memory for execution. Because of sequential processing, not all parts of the program, and similarly not the entire data area of the program, are required within a given time interval.

It is therefore sufficient to keep only the required program and data area parts (= "working set") in the working memory. The programs and data are broken down into individual sections, which are then only loaded into the memory when required (when required by the CPU).

  • Programs and data areas are not limited in their length by the real size of the main memory.
  • Several programs can be executed at the same time even if their total length exceeds the size of the main memory.

To process the individual instructions of the program, a transformation (i.e. mapping) of logical addresses into physical addresses is required.



The logical addresses in the instructions remain unchanged even after loading into main memory. The transformation takes place only when an instruction is executed: this is dynamic address translation. If an address refers to a program or data section that is not in main memory, the corresponding section is reloaded.

A combination of hardware and software must be used to implement virtual memory management. A specially designed operating system takes care of reloading the program and data sections. The basic requirement is the ability of the CPU to interrupt a running instruction (when reloading is necessary) and, after reloading, to restart the instruction and execute it in full. Virtual memory is completely transparent to the user: both the reloading of program and data sections and the address conversion take place automatically and need not be considered when programming the application.


Segment addressing (segmentation)

The logical address space is divided into sections of variable size according to the logical units of the program (subroutines, data areas, etc.). These sections are called segments. The minimum/maximum segment size depends on the respective system (typically 256 bytes to 64 KByte). This type of memory organization is mainly supported by Intel processors.

The logical address consists of 2 parts:

  • Segment number (most significant part)
  • Word address (offset, displacement; less significant part; address relative to the start of the segment).

For the segments loaded into main memory, the real start address of each segment (its base address) is recorded in a segment table. In multiprogramming mode, separate tables are usually kept for the various jobs so that segment numbers of different jobs cannot be confused. Before the segment table can be accessed, the content of a job-specific segment table register is therefore combined with the segment number.

Each time the jobs are switched (context switch), the segment table register must therefore also be reloaded. In general, the segment table also contains information about the size of the segments (specification of the last physical address of the segment or the segment size directly). In this way, incorrect accesses that lead out of the segment can be recognized and prevented.
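The segment-table lookup and size check can be sketched as follows. The table contents and the dict representation are assumed for illustration; real hardware keeps this state in registers and memory-resident tables.

```python
# Sketch: a per-job segment table maps the two-part logical address
# (segment number, offset) to a physical address and checks the segment size.

class SegmentFault(Exception):
    pass

# segment number -> (base address in main memory, segment length)
segment_table = {0: (20_000, 4_096), 1: (50_000, 1_024)}

def map_segment(segment, offset):
    if segment not in segment_table:
        raise SegmentFault("segment not loaded")         # triggers a reload
    base, length = segment_table[segment]
    if not 0 <= offset < length:
        raise SegmentFault("offset leaves the segment")  # illegal access
    return base + offset

print(map_segment(1, 100))   # 50100
```

On a context switch, the operating system would point the lookup at a different job's table, as described above for the segment table register.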

Furthermore, status and access information is assigned to each segment in the segment table (recognition of unloaded segments, prevention of unauthorized access). The segment is the smallest exchange unit. If there is no more memory available for reloading, an existing segment that is currently not required must be removed ("demand segment swapping").

Problems (which are also the disadvantages of segmentation):

  • Memory fragmentation: There are unoccupied gaps between the segments in the main memory, which arise when a segment is exchanged for a smaller segment or removed.
  • There may be cases in which the available contiguous memory space is not large enough to accommodate a segment to be reloaded, even though there is enough free memory space overall.
  • Reorganization of the memory allocation by the operating system is necessary (compacting the segments).
  • Cumbersome exchange algorithm: The associated system program itself occupies a lot of memory space, additional time is required for execution.

Page addressing (paging)

Logical and physical address space are divided into sections of equal length, called pages (typical page sizes: 512 bytes to 4 KBytes). A page represents the smallest exchange unit. A page is loaded or reloaded as required, i.e. when an address on this page is referenced ("demand paging"). If there are still physical pages free, the next free physical page is assigned to a logical page to be reloaded. If all physical pages are already occupied, a logical page must be swapped out (page change). The operating system program "Page-Supervisor" is responsible for this. A program can be in the main memory, broken up into pages, i.e. divided at page boundaries regardless of any logical boundaries.

The logical address is broken down into

  • logical page number (page address, most significant part)
  • Word address (line address, least significant part).

The word address is the address relative to the start of the page. It can be adopted unchanged into the physical address. The physical address likewise consists of two parts:

  • physical page number (page address, most significant part)
  • Word address (less significant part) that is adopted unchanged from the logical address.

Because of the fixed page size, the boundary between word address and page number is always in the same place. The physical page number must be determined from the logical page number using the address transformation.

The address transformation takes place only at the time an instruction is executed, which is why one speaks of dynamic address translation. It is usually performed using an address translation table (translation buffer), which contains the assignment pairs (logical page number, physical page number) for the pages currently in main memory.
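Because the page size is a power of two, splitting the logical address and reassembling the physical one is just a division with remainder (in hardware, a bit-field extraction). A sketch with an assumed page size of 4 KByte and assumed table contents:

```python
# Sketch of dynamic address translation: the low-order bits (word address)
# pass through unchanged, the logical page number is looked up in the table.

PAGE_SIZE = 4096                       # assumed page size (4 KByte)

page_table = {0: 7, 1: 3, 5: 2}        # logical page -> physical page

def map_page(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    if page not in page_table:
        raise KeyError("page fault")   # the page must be loaded first
    return page_table[page] * PAGE_SIZE + offset

print(map_page(1 * PAGE_SIZE + 20))    # physical page 3, offset 20 -> 12308
```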

So-called associative memories (CAM, Content Addressable Memory), which are accessed not via an address but via cell contents, are particularly well suited for this purpose. A search word (key) is presented instead of an address, and the result is either a hit or a miss. Here the key is the logical page number; a hit means there is a corresponding entry in the table, i.e. the logical page sought is loaded, and the actual content of the table entry, the physical page number, is read out.

If the associative memory reports no hit, i.e. the page is not in main memory ("page fault"), loading of this page is initiated (page change and an entry in the table!).
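The hit/miss behaviour can be modelled in software, with a dict standing in for the associative hardware; the frame numbers and table contents are assumed values.

```python
# Sketch: a small associative buffer (modelled as a dict) in front of the
# full translation table. The logical page number is the key; a miss on the
# full table is a page fault, after which the page is loaded into a free
# frame and entered in the table.

tlb = {}
page_table = {0: 7, 1: 3}              # pages currently in main memory
free_frames = [4, 5]

def lookup(logical_page):
    if logical_page in tlb:            # hit: physical page number read out
        return tlb[logical_page]
    if logical_page not in page_table:
        # page fault: initiate loading, then enter the page in the table
        page_table[logical_page] = free_frames.pop()
    tlb[logical_page] = page_table[logical_page]
    return tlb[logical_page]

print(lookup(1))   # miss in the buffer, hit in the table -> 3
print(lookup(9))   # page fault -> newly assigned frame 5
```

In real hardware the associative search over all entries happens in parallel in a single cycle, which is exactly what makes CAMs attractive here.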

Advantages of the paging method (compared to segmentation):

  • no memory fragmentation: any free physical page can accommodate any logical page.
  • no need to search for a "matching hole" in memory for a page to be reloaded; this keeps the page-replacement software simple.
  • significantly less time required for transfers between secondary storage ("page file") and main memory.
  • more active programs can generally reside in main memory at the same time (main memory is not occupied by program sections that are rarely or never required).

Conclusion: Paging is more suitable than segmentation for the implementation of a virtual storage system.

Page change (segment change) algorithms

The characteristic of virtual memory is the loading or reloading of program sections (pages or segments) on demand during program execution. The basic prerequisite for its implementation is therefore the ability of the CPU to interrupt a running instruction (if a page or segment has to be reloaded) and, after reloading, to restart the instruction and execute it in full. There are various strategies for selecting the page (or segment) to be removed from main memory when a page change is necessary.

  • In the most common strategy, the page (or segment) that has gone unreferenced the longest is selected: Least Recently Used (LRU). The assumption is that the page that has not been used for the longest time will in all likelihood not be needed in the near future either.
  • Pages or segments that have not been written to, i.e. whose content has not been changed, need not be saved back to the page file but can be overwritten immediately by the new page (indicated by a "dirty bit" in the page table). This shortens the page change time. Unchanged pages are, for example, pages containing program code; if such pages are needed in main memory again, they can be reloaded directly from the program file (EXE file).
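LRU replacement with a dirty bit can be sketched as follows. The frame count of 3 is an assumed value, and the OrderedDict stands in for the recency bookkeeping that real systems approximate in hardware.

```python
# Sketch of LRU page replacement with a dirty bit.
from collections import OrderedDict

FRAMES = 3
resident = OrderedDict()     # page -> dirty flag; insertion order = recency

def access(page, write=False):
    """Reference a page; return the page written back on eviction, if any."""
    written_back = None
    if page in resident:
        resident.move_to_end(page)                     # now most recently used
    elif len(resident) < FRAMES:
        resident[page] = False
    else:
        victim, dirty = resident.popitem(last=False)   # the LRU victim
        if dirty:
            written_back = victim    # only dirty pages go back to the page file
        resident[page] = False
    if write:
        resident[page] = True
    return written_back

for p in (1, 2, 3):
    access(p, write=(p == 2))    # page 2 is written to (dirty)
access(1)                        # LRU order is now 2, 3, 1
print(access(4))                 # evicts page 2: dirty, written back -> 2
print(access(5))                 # evicts page 3: clean -> None
```

Exact LRU is expensive to maintain per access, which is why real systems usually approximate it (e.g. with reference bits); the sketch shows the policy itself.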