Note 118057 - Flexible Configuration of the Spool Service

Version / Date 7 / 2001-01-11
Priority Recommendations/additional info
Category Consulting
Primary Component BC-CCM-PRN Print and Output Management
Secondary Components


General Information

Other terms

Spool Server, alternate server, queue, cache, spool work process, server classification, logical server, server mapping, emergency replacement server, load balancing


Flexible Configuration of the Spool Service of an R/3 System

Table of Contents

1 Introduction
2 Arranging the Spool Service of an Instance
2.1 Separate Administration of a Request Queue for the Spool Service
2.2 Central Administrative Actions
2.3 Multiple Spool Work Processes
3 Flexible Configuration of Server Architecture
3.1 Classifying Spool Servers and Output Devices
3.2 Logical Servers
3.3 Alternate Servers
4 Queues and Caches
4.1 Spool Request Queue
4.2 Device Cache
4.3 Host Spool Request List
4.4 Server Cache

1   Introduction

The formatting component is an important element of the R/3 spool system.  During formatting, the internal R/3 format, which is device-type-independent, is converted into the device-specific printer language.  This task is performed by a special type of work process, called the spool work process.  In addition to formatting, the spool work process also transfers the formatted data to an external spool system that handles the final output of a document on an output device.

Since the external system and the output device managed by this system might only be available on one or a few computers, formatting must usually be done by a particular R/3 server.  For example, output devices that use access method 'L' are permanently connected to a dedicated computer and can only be served by this computer.  Output devices that use the network access method 'U' can in principle be addressed by every computer; however, routers and firewalls may restrict access.  To assign the formatting and transfer process to a particular R/3 server, you have to assign every R/3 output device to one particular server.  Every request to these output devices is then processed by the spool service (a spool work process of the assigned server).

Up to Release 3.X, this process suffered from some substantial problems:

      a) Load problems on the spool server, if too many requests were produced or too many output devices had been assigned.

           Only one spool work process could be configured for one instance.  At the same time, the work process was used for administrative tasks.

      b) Server names were usually valid only in a single R/3 system.

           The names of the R/3 servers are generated automatically.  They consist of the host name, the system name (e.g. C11), and the instance number, and they are only valid in one particular R/3 system.  A spool server is assigned to each output device that needs a spool work process.  Therefore, definitions of output devices always had to be adjusted after they had been transported into another system, even if the output device could be addressed with a network access method from R/3 Systems residing on other computers.

      c) If an instance failed, the assigned output devices would fail as well.
      d) Output devices assigned to a server could only be reassigned to another server as a group.

           All the output devices of a spool server could be reassigned semi-automatically to other servers; however, it was not possible to restore the original server assignment afterwards.

      e) Servers were assigned exclusively.

As of R/3 Release 4.0, most of the problems mentioned were solved by a general concept.  To achieve this, the spool service on an instance was reorganized, and you can configure the server architecture more flexibly in order to improve the assignment of spool servers to output devices.  You can now configure several spool work processes within an instance, and administrative actions within the spool service can be performed more effectively, which solves problem a).  Logical servers and a mapping mechanism between servers provide a general concept for solving the remaining problems described above.

2   Arranging the Spool Service of an Instance

As of Release 4.0, you can use the spool service of an instance more flexibly.  By using a request queue within the spool service, administrative actions can be processed independently of the number of waiting requests.

All spool servers in an R/3 System coordinate their actions, so that central actions within the spool system for processing output requests are executed by only a single server.  Consequently, work process load and, most importantly, database load caused by searching in tables is avoided.

Configuring multiple spool work processes for one instance helps cope with rising output load on an instance.

2.1   Separate Administration of a Request Queue for the Spool Service

In addition to the request queue within the dispatcher, the spool service also maintains a separate job queue as of Release 4.0, which is used by the spool work processes and also serves to coordinate their actions.  The spool work processes transfer requests from the dispatcher queue into the spool request queue by priority.  Only when the dispatcher queue is empty or the spool request queue is full are output requests processed.

When the internal queue is full, requests are processed even while further requests are waiting in the dispatcher queue.  During this time, all further requests accumulate in the dispatcher queue and cannot be transferred into the spool request queue.  Even if the internal queue overflows, requests cannot be lost.  The spool service can accept new jobs only once there is space in the internal queue again.  The dispatcher thus absorbs output requests if the spool request queue is too small, or if there is a short-term, unexpected job load.

By transferring requests into a self-administered queue, the spool service can control the order of the request processing independently of the dispatcher.  You can execute administrative actions, which are initiated by separate requests to the spool service, before output requests are processed.  At the same time, you can have output requests processed according to their priority.
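The transfer-then-process loop described above can be sketched in Python.  This is a simplified model for illustration only, not SAP code; the queue capacity and request names are invented:

```python
from collections import deque

def spool_work_process_step(dispatcher_queue, internal_queue, capacity, process):
    """One scheduling step: first transfer requests from the dispatcher
    queue into the internal spool request queue until the dispatcher
    queue is empty or the internal queue is full; only then process a
    single request from the internal queue."""
    while dispatcher_queue and len(internal_queue) < capacity:
        internal_queue.append(dispatcher_queue.popleft())
    if internal_queue:
        process(internal_queue.popleft())

# Example: internal queue capacity 3, five waiting requests
dispatcher = deque(["r1", "r2", "r3", "r4", "r5"])
internal = deque()
done = []
spool_work_process_step(dispatcher, internal, 3, done.append)
# "r1".."r3" are transferred, then "r1" is processed;
# "r4" and "r5" remain absorbed by the dispatcher queue
```

The dispatcher queue here plays exactly the overflow-buffer role described in the text: requests that do not fit into the internal queue simply stay where they are.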

2.1.1   Direct Execution of Administrative Actions

When requests are transferred from the dispatcher queue into the spool request queue, only output requests are transferred.  Requests for administrative actions are processed immediately by the current spool work process.  Such actions are no longer obstructed or delayed by long-running or numerous output requests.  The system ensures that, for example, the external host spool systems can be queried regularly, even if many or very long output requests are produced within a short time.  If the spool request queue overflows, the dispatcher queue is additionally used to queue further requests.  These further requests then cannot benefit from the internal queue until processing frees up space.

In addition to the preferential handling of administrative actions, certain actions are executed globally and not for individual servers (see 2.2).  The cost of these actions is reduced as far as possible, because the same actions are not executed on all spool servers in parallel.

Administrative operations for resetting cache entries of the spool system, which up to Release 3.X were executed by the spool work process, have been shifted to dialogue work processes (see 4).

2.1.2   Output Requests with Priorities

Because the spool work processes administer the request queue themselves, new requests can be inserted at any position.  The waiting requests can thus be organized according to their priority.

However, considering priorities can alter the order in which requests are processed.  This means that requests sent to an output device at the same time can be interrupted by other requests with different priorities.

Priorities for spool and output requests have existed for a long time.  By default, the value 3 is used; a lower number means a higher priority.  Values up to 9 can be used.  Moreover, in Transaction SP01 you can enter values up to 99.  These are interpreted correctly; however, with a default of 3 this distinction is rather irrelevant.

Priorities have so far been neither considered during processing nor used outside of the spool system, so for the time being they are only displayed within the spool system.  Components related to R/3 (e.g. ABAP) do not provide corresponding input fields, and structures outside of the spool system that receive output parameters do not contain such a field.  Therefore, you can choose a request priority other than the default value only in output management (SP01).  When creating spool requests (in dialogue, when scheduling batch jobs, or directly from a program using the appropriate ABAP statements), this specification cannot be made.  There are also no authorizations yet for assigning high priorities.
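The priority ordering described above (lower number = higher priority, default 3, requests of equal priority kept in generation order) can be modelled with a heap.  This is an illustrative sketch, not the actual spool implementation; the class and method names are invented:

```python
import heapq
from itertools import count

class SpoolRequestQueue:
    """Sketch of a priority-ordered spool request queue: a lower number
    means a higher priority (default 3), and a running sequence counter
    keeps requests of equal priority in the order they were generated."""
    def __init__(self):
        self._heap = []
        self._seq = count()

    def put(self, request, priority=3):
        heapq.heappush(self._heap, (priority, next(self._seq), request))

    def get(self):
        return heapq.heappop(self._heap)[2]

q = SpoolRequestQueue()
q.put("list_a")                 # default priority 3
q.put("invoice", priority=1)    # higher priority, jumps the queue
q.put("list_b")                 # default priority 3, after list_a
```

The sequence counter is what prevents two equal-priority requests from being reordered, matching the default behaviour in which almost all requests share priority 3.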

2.1.3   Queue Overflow

The size of the internal spool request queue is limited.  An overflow of this queue can occur if requests are created faster than they can be processed (see 4.1).  If the spool request queue overflows, the dispatcher queue is automatically used to queue further requests, so requests cannot be lost.  However, further requests cannot benefit from the internal queue until processing frees up space.  Priorities of subsequent requests are not weighed against the output requests already in the internal queue.  Preferential handling of administrative tasks is also no longer possible, because these requests pile up in the dispatcher queue.

2.2   Central Administrative Actions

Beginning with Release 4.0, global operations that had previously been executed decentrally on all servers at the same time are now executed only once, on exactly one server.  This reduces the database load caused by cyclical tasks.  Before Release 4.0, the more spool servers existed in a system, the more often a cyclical task was executed, because it had to run on each server.  Now, only the initiation takes place on each server.  All servers coordinate their actions to decide where and when the cyclical action is executed.  This coordination is based on a time stamp in UTC.

The coordination occurs locally using the spool semaphore 43 (SEM_RSPO_CACHE) or an internal binary lock, which prevents several work processes on one server from trying to initiate global actions.  Globally, locking occurs using an enqueue lock on the lock object ESTSPSV with the lock argument _GLOBAL_.  This object is also used for locking server definitions during maintenance, so you should not define a logical server with the name _GLOBAL_ (see 3.2).  The enqueue lock guarantees that only one global action runs at a time.

If a global action is to be executed, all the spool servers try to initiate it.  If a server realizes that such an action is already being executed, it will terminate its attempt and continue with its normal tasks.  Every time a global action is executed all the queued global actions are executed before the global lock is reset.  This guarantees that this action is executed exactly once as long as at least one spool server exists in the system.
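The initiate-and-abandon behaviour can be illustrated with a small sketch.  This is only an analogy: the real mechanism uses the ESTSPSV enqueue object across servers, not a Python lock, and the function and variable names are invented:

```python
import threading

# Stands in for the enqueue lock on ESTSPSV with argument _GLOBAL_
global_action_lock = threading.Lock()
executed = []

def try_initiate_global_action(server, action):
    """A spool server tries to initiate a global action.  If another
    server already holds the global lock, the attempt is abandoned and
    the server continues with its normal tasks."""
    if not global_action_lock.acquire(blocking=False):
        return False  # action is already being executed elsewhere
    try:
        executed.append((server, action))
        return True
    finally:
        global_action_lock.release()
```

The non-blocking acquire is the key point: a server never waits for the global lock, it simply gives up and carries on, which is exactly the "terminate its attempt" behaviour described above.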

In addition to the spool servers that execute global actions regularly, you can also redirect output requests interactively in Transaction SPAD.  The same exclusion mechanism is used here; however, the interactive action is canceled if the global lock cannot be obtained.

As of Release 4.0, two global actions exist: a periodic search for lost requests and a redirection of requests if a server fails.  As of Release 4.6A, the reorganization of the spool system (deletion of old requests) will be added.  Each of these actions is initiated separately by every server.  Because of the coordination, however, they are executed by only one or, in the case of a redirection, by a few servers.

2.2.1   Searching for Lost Requests

Before an output request can be processed by a spool work process, it runs through several queues, all with limited capacity.  The commit message queue buffers messages in the application transaction until the transaction is committed; the messages are delayed until the entries are written to the database.  The dispatcher request queue on the target server buffers incoming messages until they can be handed over to a free work process.  Since requests are now transferred to the internal queue of the spool service (see 2.1), this queue should normally be relatively empty.  Only during longer periods of overload does it accumulate requests that were sent to the spool service.

If an overflow occurs, messages and requests on their way to the spool work process can be lost.  Since requests are recorded in the database, they are not lost completely.  However, the spool service on the target instance does not notice that these requests exist, because it does not receive the corresponding messages.  That is why the spool work process periodically (approx. every 20 min.) searches for such requests.  It finds the unprocessed requests whose messages were lost, together with all the requests that simply have not been processed yet.  Up to Release 4.0, the requests found during this scan were processed explicitly.  To find all requests, the scan was repeated until every unprocessed request had been processed.  Repeating the scan was necessary because each processed request required a commit, so processing could not run inside one open select on the database.  If further output requests kept being created for this server, this kind of 'emergency' processing could take over entirely: the messaging mechanism was completely bypassed, the dispatcher queue overflowed, the query of the host spool system was not started, and all requests were processed only via searches in the database.

By using the internal spool request queue, all missing messages can now be restored and processing started afterwards.  This method has been available since Release 3.1H.  However, missing requests were still searched for separately, with selects on each server.  The more spool servers existed in a system, the more often the database search was executed, even if a long scan interval had been defined.

Since global actions are now available, there is only one search for unprocessed requests per system within the selected period.  New request messages are sent to the respective spool servers and then transferred directly by the spool work processes from the dispatcher queue into the internal request queue.  Missing requests are automatically added to the internal queue; if messages have been sent repeatedly, no duplicate entries are created in the queue.

To check the time when the last search took place, select Settings -> Spool System in transaction SPAD.  According to the selected period, each spool server initiates a global action, which the spool servers coordinate.  A new scan is started globally when the period after the last known scan is over.

2.2.2   Redirecting Output Requests if a Server Fails

When you arrange the spool server architecture (see 3) you can define alternate servers which take care of the requests if the server that was originally designated fails.  Requests are redirected automatically according to the current system architecture.  New requests are sent directly to the alternate server.  However, requests which already exist and which have been directed to a server are redirected if that server fails.

This redirection is also a global action.  If a server fails or is restarted, all existing servers are informed by a dialogue work process, which performs a corresponding handling routine.  If both the failed server and the server that executes the handling are spool servers, the spool service is instructed to redirect the requests.  Each spool server then initiates a global redirection.  Since the spool servers coordinate their actions, only one redirection is executed within one system at a time.  If some initiations last longer than the redirection (for example, if requests are being processed), several redirections might be executed.  This is harmless: all requests have already been directed to active spool servers, so a repeated redirection finds no further unprocessed requests to redirect.  As long as at least one active spool server can be reached, a redirection will definitely take place.

2.3   Multiple Spool Work Processes

The spool service of an R/3 instance has two main tasks.  First, it manages the processing of output requests, especially formatting output data and transferring it to the host spool system.  Second, it has to query the status of an output request that has been handed over to the host spool system, and to update this status within the R/3 System until the host spool system has finished processing the output request.  For this purpose, each device queue in the host spool system that received requests from the spool work process is queried periodically.  This is repeated for each queue until it no longer contains any requests from the instance concerned.

Beginning with Release 4.0, you can configure the spool service of an instance with several spool work processes, analogous to the batch and dialogue services.  However, in contrast to the conventional services, the spool service needs to coordinate the queries to the host spool systems in a complex way.  Requests can still be sent to an output device in the order they were generated.

2.3.1   Querying the Status at the Host Spool Systems

The query is started via a periodic request to the spool service that the dispatcher creates every minute.  With the profile parameter rspo/rspoget2/min_alarm_intervall you can select how often this alarm message is actually transformed into a query.  The interval set in the profile parameter should be several times longer than the period of the alarm message.
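The alarm-to-query throttling can be sketched as follows.  This is a simplified model under stated assumptions: the exact semantics of the profile parameter are assumed here (a minimum interval between queries), and the class and variable names are invented:

```python
import time

class QueryThrottle:
    """Sketch of the throttling: the dispatcher raises an alarm every
    minute, but an actual host spool query is only started when at
    least `min_interval` seconds have passed since the last query."""
    def __init__(self, min_interval, clock=time.monotonic):
        self.min_interval = min_interval
        self.clock = clock
        self.last_query = None

    def on_alarm(self):
        now = self.clock()
        if self.last_query is None or now - self.last_query >= self.min_interval:
            self.last_query = now
            return True   # transform this alarm into a query
        return False      # skip: queried too recently

# Example with a fake clock: alarms at t=0, 60 and 200 seconds,
# with a minimum query interval of 180 seconds
t = [0.0]
throttle = QueryThrottle(180, clock=lambda: t[0])
```

With these values, only the alarms at t=0 and t=200 become queries; the alarm at t=60 is suppressed.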

Before Release 4.X, there were often problems with the status query because it could sometimes take a long time.  Since only one spool work process could be configured for an instance, no output requests could be processed during that time.

If multiple work processes are configured, the dispatcher can deliver the query message to any of these work processes.  If their actions were not coordinated, several queries could take place simultaneously, and during long-running queries the whole spool service could be locked.  Since Release 4.0, the spool work processes coordinate their actions to guarantee that at most one work process queries the host spool system at a time.  The configured time interval between the end of one query and the start of the next is also maintained.

Even during a long-running query, the spool service is still available and can process requests.  At most one work process is temporarily withdrawn from request processing for querying, while the others can still be used to process output requests.  Since every spool work process is a potential querier as well as a potential processor, the list of the requests in the host spool system must be kept in shared memory to grant every process access to it.  This resource is limited, so only a limited number of requests in the host spool system can be managed (see 4.3).

2.3.2   Printing in Order-of-Generation

If multiple work processes are configured for the spool service of an instance, the dispatcher distributes requests equally among all these work processes.  The spool request queue is built up by the work processes, not by the dispatcher.  A work process accepts requests from the dispatcher queue and puts them into the spool request queue until the dispatcher queue is empty or the internal spool request queue is full.  Only then is a request processed.  After processing, the loop is restarted.

Problems with Printing in Order of Generation

There are two problems with printing requests in the order they were generated:

  • The work processes work independently of each other, and the dispatcher delivers an unprocessed request to every idle work process.  That is why requests cannot be transferred from the dispatcher queue into the spool request queue with the standard mechanism for work processes.
  • Since requests can be processed in parallel, they can overtake each other during processing and transfer to the host spool system, even if the processing time is the same.  This means that printing the requests in the order they were generated cannot be guaranteed.

Accepting Requests

If requests in the dispatcher queue are to be transferred to the spool request queue in the order they were generated, the dispatcher has to coordinate its actions with the spool work process in charge.  It does not dispatch the following requests to other spool work processes until the work process in charge confirms that the request has been accepted into the internal spool request queue.  The dispatcher uses its standard reply message mechanism for work processes for this confirmation.  When a spool work process finishes processing a request, or if a spool work process fails, the dispatcher again allows spool requests to be sent to other spool work processes.

Reserving Work Processes

Output requests are only guaranteed to be transferred to the host spool system in the order they were generated if they are processed by a single work process in the R/3 spool system.  As soon as several work processes process requests in parallel, some requests may overtake others unless the transfer to the host spool system itself is synchronized explicitly.  Explicit synchronization would be rather complicated, because the order of the data transferred by the processes to the host spool system would have to be coordinated, and this coordination would lock work processes so that no further requests could be processed.  Since it only makes sense to process the requests of one output device in the order they were generated, explicit coordination would obstruct the processing of output requests for other devices.  That is why this coordination is implicit: only one spool work process processes the requests of one device at a time, while other spool work processes can process requests of other devices.

To accomplish this coordination, in addition to the spool request queue, which is used globally by the spool service on one instance, there are several request queues for the individual spool work processes, each containing requests for only one device.  Work processes that are not processing any output requests are initially not assigned to any output device.  The dispatcher distributes requests among these work processes.  When a work process starts to process a request, it is implicitly assigned to the device to which the request will be sent.  Every work process that wants to start processing an output request checks whether another request for the same output device is already being processed by another work process.  In that case, the request is not processed directly but is put into the request queue of the spool work process responsible for that device.  Such a reserved work process processes all the requests in its queue one after another.  After that, it is no longer assigned to a particular output device and can accept requests from the dispatcher again.
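The implicit device reservation can be modelled schematically.  This is an illustrative sketch, not SAP code; the class, work-process and device names are invented:

```python
from collections import deque

class SpoolDispatch:
    """Sketch of implicit device reservation: while a work process is
    handling requests for a device, further requests for that device go
    into that process's own queue instead of being processed in parallel."""
    def __init__(self):
        self.owner = {}      # device -> work process currently reserved
        self.queues = {}     # work process -> deque of waiting requests

    def start(self, wp, device, request):
        """wp wants to process `request` for `device`.  Returns the work
        process that will actually handle it (wp itself, or the one
        already reserved for the device)."""
        holder = self.owner.get(device)
        if holder is not None and holder != wp:
            self.queues[holder].append(request)   # hand over, keep order
            return holder
        self.owner[device] = wp
        self.queues.setdefault(wp, deque())
        return wp

    def finish(self, wp, device):
        """wp finished a request; return the next one queued for its
        device, or release the reservation so wp is free again."""
        q = self.queues.get(wp)
        if q:
            return q.popleft()
        self.owner.pop(device, None)
        return None

d = SpoolDispatch()
```

For example, if wp1 starts a request for device LP01, a second request for LP01 arriving at wp2 is handed over to wp1's queue, while a request for LP02 can run on wp2 in parallel.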

Only idle spool work processes can accept requests from the spool request queue and put them into the work process-specific queues.  If all spool work processes are processing requests, requests are only distributed to work processes after the first work process has finished processing its queue.  This work process can process either an administrative request or an output request.

The procedure described above dynamically reserves spool work processes according to the volume of requests.  All devices can be served even if the spool server is responsible for more devices than there are spool work processes available.

Output Devices Printing Requests in the Order They Were Generated

To guarantee that the output requests of a device are printed in the order they were generated, they have to be processed exclusively by one work process.  The load on a spool server cannot be reduced by distributing the requests of a single device across more work processes.  You cannot process output requests for a single R/3 device faster by setting up more spool work processes.  Increasing the number only makes sense if the spool server is overloaded because there are too many output requests for all the output devices served by this server.

The load can be distributed better (as in the dialogue service) if new output requests can be processed by all spool work processes without restriction.  When you configure an output device, you can decide whether the output requests for this device are to be printed in the order they were generated.  Only if this option is not selected are output requests processed in parallel, which solves load problems.

3   Flexible Configuration of the Server Architecture

SAP has long recommended classifying R/3 spool servers and output devices according to their usage.  As of Release 4.0, this classification is possible in R/3 itself, so the system can check the configuration automatically.

By using logical servers, you can arrange the server architecture of the spool system flexibly and independently of the system.  Output requests are forwarded via logical servers to real spool servers.  Moreover, you can configure alternate servers in order to have output requests forwarded to the R/3 servers that are active at the moment.  If you address a logical server, the system automatically maps this server to a real, active server according to the mapping configuration.  The real server is then used instead of the logical server.

3.1   Classifying Spool Servers and Output Devices

Problems when processing the requests of one output device can directly obstruct the processing of requests of other devices if both devices are served by the same spool server.  The same applies to processing very large requests.  Long processing times and time-outs due to connection problems might occupy a spool work process for a long time, so it cannot be used to process other requests that are shorter or more important.

To prevent, or at least control, output devices impeding each other during request processing, and to distribute output devices among different servers, SAP recommends classifying output devices according to the following criteria:

  • Production print

           Printers that are necessary for smooth production operations (e.g. printing documents) should be run locally.  This helps you avoid connection problems during data transfer due to network or availability problems.  Also, do not send long-running jobs to production printers.  Do not use desktop printers for production printing, because these printers, or the computers they are connected to, are often switched off.

  • High volume print

           High volume printers output very large requests (e.g. cost center lists).  Solely because of their processing time, all other output is affected, regardless of connection or availability problems.

  • Desktop printing

           Desktop printers in the work place (e.g. for SAP office documents) or the computers they are connected to are often switched off.  This can affect all other output.  SAP recommends classifying those printers as desktop printers that do not output any documents of importance for production operations of your enterprise.

  • Test printing

           This includes output devices that are used to test new device types or new configurations.  During testing there is a lot of output and errors might occur frequently.

Beginning with Release 4.0, you can assign both output devices and servers to these classes.  During configuration or when the spool system checks the installation, there will be warning messages if an output device belongs to a different class than the server it is assigned to. Classification is optional; there is no check if devices and servers are not classified.

3.2   Logical Servers

Logical servers are used when configuring the spool system instead of regular R/3 servers.  In Transaction SPAD, you can use the name of a logical server whenever spool server names are used.  You can define logical servers in SPAD.

Real servers are potentially active R/3 instances that operate on one computer and to which users can log on.  They can have the status active (the instance is running and operating) or inactive (the instance is not available at the moment).

In contrast to a real server, a logical server itself is never active. It is mapped to another server that can be a logical or a real server. There must always be a real server at the end of each mapping tree. Every logical server can be mapped to one server, whereas a (real or logical) server can be the mapping target of many other logical servers.

If a logical server is to be used, for example, to forward an output request to a spool server or to determine an RFC destination, the mapping chain is traced until a real server is found.  This real server is the actual target of the original action; that is, the logical server is replaced by the real server found at the end of the mapping chain.
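Tracing the mapping chain can be sketched as a simple lookup loop.  This is an illustration only; the server names, the function name, and the representation of the configuration as Python dictionaries and sets are invented:

```python
def resolve_to_real(server, mapping, real_servers):
    """Follow the logical-server mapping chain until a real server is
    reached.  Every logical server maps to exactly one target; a cycle
    or a dangling chain is a configuration error."""
    seen = set()
    while server not in real_servers:
        if server in seen or server not in mapping:
            raise ValueError("broken mapping chain at " + repr(server))
        seen.add(server)
        server = mapping[server]
    return server

# Hypothetical configuration: two logical layers above one real server
mapping = {"PROD_PRINT": "SPOOL_MAIN", "SPOOL_MAIN": "hostA_C11_00"}
real_servers = {"hostA_C11_00", "hostB_C11_01"}
```

A real server passed in directly resolves to itself, which mirrors the rule that a real server must stand at the end of every mapping tree.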

3.2.1   Grouping Output Devices and Rearranging Server Mappings

You can assign output devices directly to a real server or to a logical server.  By using logical servers you can distribute output devices more precisely, because several logical servers can be mapped to the same real server.  You can change the assignment of output devices to real spool servers as a group; you do not have to change the definition of every device.  This is done by centrally changing the mapping of the assigned logical server.  The assignment of output devices to logical servers is not changed.  A logical server can thus be described as a "device group".  If the names of the logical servers are independent of their respective systems, the grouping will also be system-independent.

If you assign output devices to logical servers, you can, for example, distinguish between devices which have to be assigned permanently to a certain server because they can only be operated there (e.g. with access method L or C) and devices which can be accessed freely in the network (e.g. access method U or S).  For these purposes you can use different logical servers which are mapped to the same real server.

3.2.2   System-Independent Server Mappings

In contrast to the names of real servers, you can choose the names of logical servers independently of the system.  By doing so, you can install an identical architecture of logical servers in several R/3 Systems.  Only the mapping of the lowest layer of logical servers to real servers is system-specific.

Output devices that are solely assigned to logical servers can be freely transported between different R/3 Systems. By using logical servers and connecting the spool system to the transport and correction system, you can transport whole configurations from one R/3 System to another.  For example, you can transport definitions of R/3 devices and external OMS (output management systems); no further configuration is necessary.

3.3   Alternate Servers

In addition to logical servers you can also assign attributes to real servers in the spool system.  You can assign an alternate server to each server definition (real or logical) that will be used if a server fails.

You can use an alternate server in two ways:

  • Only when the server that is normally used is stopped.
  • As a regular alternative that is used for load balancing between spool servers, if requests do not have to be processed in the order of their creation.

3.3.1   Mapping Mechanism of the Servers

The mapping mechanism of the R/3 spool system is based on logical and alternate servers.  This results in the following mapping procedure:

Whenever a (real or logical) server is addressed, it is mapped to a suitable real and, if possible, active server, according to its intended purpose.  Each branch in the mapping and alternate tree is checked recursively. Depending on whether the alternate server is used to substitute a stopped server or for load balancing, different criteria are used to make the decision. According to the values found in the mapping and the alternate branch, the system decides to which branch the output requests are sent.
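The recursive check of the mapping and alternate branches can be illustrated as follows. This is a minimal sketch assuming a simple tree of server nodes; the class and attribute names are illustrative and not the actual R/3 implementation.

```python
# Hedged sketch of the recursive server mapping: follow the mapping
# branch to a real server first, then fall back to the alternate branch.

class Server:
    def __init__(self, name, mapped_to=None, alternate=None, active=False):
        self.name = name
        self.mapped_to = mapped_to  # next node in the mapping branch (None for a real server)
        self.alternate = alternate  # root of the alternate branch, if any
        self.active = active        # whether this real server is currently running

def resolve(server):
    """Return an active real server for 'server', preferring the mapping branch."""
    if server.mapped_to is None:        # a real server has been reached
        if server.active:
            return server
    else:                               # check the mapping branch first
        found = resolve(server.mapped_to)
        if found is not None:
            return found
    if server.alternate is not None:    # fall back to the alternate branch
        return resolve(server.alternate)
    return None
```

For example, a logical server mapped to a stopped real server but with an active alternate resolves to the alternate.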

3.3.2   Mapping Information

There are three different kinds of mapping relations between logical and real servers:

  • Determining the real server

           At the end of every logical server chain there is always a real server.

  • Determining an active server

           According to the configured server architecture and the current system status, the system determines an active server for the server that is currently used. This information is used, for example, with the callback interface of access method E to determine the target system for RFC calls.  Every active R/3 server can be used as an RFC target; no spool service is necessary.

  • Determining an active spool server

           According to the configured server architecture and the current system status, the system determines an active server with a spool service for the server that is currently used.  This mapping is used when output requests are created to determine the current spool server for the output device that is to be used.

In the server definition in Transaction SPAD, you can display the mappings of a server according to the current system architecture.

3.3.3   Emergency Replacement Server

If an active server is found in the mapping branch, the output requests are sent to the mapping branch.  If no active server is found there, an active server is searched for in the alternate branch.  This search starts with the server's alternate server.  The same decision process is repeated at every node of the two branches.

With this mapping relation, the output device is exclusively assigned to one mapping branch as long as the server architecture of the R/3 System does not change.  If a server fails, the system can automatically change this mapping according to the statically configured alternate servers.

3.3.4   Alternate Server for Load Balancing

If "Allow load balancing" is selected in the definition of a server, you can have output requests distributed between the mapping and the alternate branch.  The decisive factor for selecting the mapping or the alternate branch is the level of usage: the branch that contains the server with the lowest load is selected.  In this context, load is defined by the length of the spool request queue of the spool servers (see 2.1).

With this setup for determining a spool server, the exclusive assignment of an output device to the mapping branch is lost: the output requests of a device can be processed by several spool servers simultaneously.  However, each individual output request is still assigned to exactly one spool server; once assigned, it is only moved to an alternate server if that server fails, not because the load situation has changed.  Several spool servers can therefore serve one output device.
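The branch selection by load can be sketched as follows, assuming the candidate servers and their queue lengths have already been determined; the names and data shapes are illustrative, not the actual R/3 code.

```python
# Hedged sketch: pick the branch whose active spool server currently has
# the shortest spool request queue (the "load" measure from section 2.1).

def select_spool_server(candidates):
    """candidates: (name, is_active, queue_length) tuples for the spool
    servers found in the mapping and the alternate branch.
    Returns the name of the active server with the shortest queue."""
    active = [c for c in candidates if c[1]]
    if not active:
        return None                       # no active server in either branch
    return min(active, key=lambda c: c[2])[0]
```

With a mapping-branch server holding five queued requests and an alternate-branch server holding two, the alternate branch receives the next request.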

If output requests are distributed in this way, processing in the order the requests were generated can no longer be guaranteed, regardless of the number of spool work processes. Load balancing by means of alternate servers therefore does not occur for output devices that were configured to process output requests in the order of their creation.  For such output devices, every alternate server in the assigned server hierarchy is only used in case of a server failure; the load balancing option is ignored.

4   Queues and Caches

As of Release 4.0, there are several new queues and caches that are kept in shared memory.  The caches marked with an asterisk (*) in the following list are used by the spool system but do not directly belong to it, or are not described in this chapter because they are outside the scope of this topic.

If the configuration of one of the objects in the table below is changed in the database, Transaction SPAD automatically updates the cache entries on all application servers in the system. A spool work process is no longer necessary for this, and no spool administration message is used.  Instead, the message server sends a global ADM (administration) message to all active servers of an R/3 System.  This message is then processed by an idle dialog work process.

Cache or Queue              Usage
character set cache *       for the character set conversion within
                            the R/3 System
format cache *              for storing format actions used for lists
control cache *             for storing print controls of a device type
device cache                for storing device definitions and server
                            assignments
server cache                for storing server definitions and mappings
                            between servers
spool request queue         internal queue in the spool service for
                            storing output requests for further
                            processing
host spool request list     list of requests transferred to a host spool
                            system which have to be queried but are not
                            yet reported as finished

4.1   Spool Request Queue

The spool request queue (see 2.1) accepts output requests so that fewer entries accumulate in the dispatcher queue.  The queue entries are used both for the global request queue in front of the spool service and for the work process-specific queues; entries are transferred from the first queue to the second.  After processing, they are released again and can accept new requests.

You can configure the size of the request queue with the profile parameter rspo/global_shm/job_list.  The default and minimum value is 50.  The request queue absorbs short-term overloads, that is, phases in which output requests are created faster than they can be processed.  The queue should therefore be large enough to absorb such peaks.  If the overload lasts longer, the requests are stored in the dispatcher queue.  However, since the size of the dispatcher queue is also limited, the total number of queued requests is restricted.  If there is a general (not temporary) overload on a spool server, enlarging these two queues is not a solution; this would only help if more output requests were created temporarily than could be processed and there was enough time for processing between such peaks.  If there is a general overload, you can improve throughput by reconfiguring the R/3 System. You can use the following functions for this purpose:

  • Increasing the number of spool work processes for the instance affected (see 2.3).
  • Distributing requests among several spool servers (see 3.3.4).
  • Distributing output devices better among the spool servers.
  • Increasing the number of spool servers.
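The two-level queuing described above can be sketched as follows. This is a hedged sketch: the dispatcher queue bound is an illustrative assumption, not an actual parameter.

```python
# A bounded spool request queue absorbs short-term peaks; only when it is
# full do requests back up into the (also bounded) dispatcher queue.

from collections import deque

SPOOL_QUEUE_SIZE = 50         # rspo/global_shm/job_list (default and minimum)
DISPATCHER_QUEUE_SIZE = 2000  # illustrative bound, not an actual parameter

spool_queue = deque()
dispatcher_queue = deque()

def enqueue(request):
    """Accept a new output request; returns where it was stored."""
    if len(spool_queue) < SPOOL_QUEUE_SIZE:
        spool_queue.append(request)
        return "spool"
    if len(dispatcher_queue) < DISPATCHER_QUEUE_SIZE:
        dispatcher_queue.append(request)
        return "dispatcher"
    return "rejected"  # sustained overload: both queues are full
```

Enlarging either constant only helps with temporary peaks; a sustained overload fills both queues regardless of their size.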

4.2   Device Cache

The device cache already existed in earlier releases.  There, it was only used by the spool work process and had to contain all the output devices the relevant spool server was responsible for.  Only requests for output devices that could be stored in this buffer could be processed; if the cache was too small, some output devices could not be served.

As of Release 4.0, the device cache has a new role. It serves as a cache for device definitions and server assignments for all work processes (and also for servers other than spool servers).  Entries are recorded in the cache when they are needed; however, they can be displaced if no free entries are left.  For this usage, the required cache size depends on the number of configured output devices.

If the cache serves spool servers, there is an additional usage. In order to query the host spool system selectively, the spool service has to record for which output devices requests exist in the host spool system that have not yet been reported as processed.  Entries in the device cache are used for this purpose; however, these entries are set and fixed until all the requests of a device have been reported as finished.  The number of entries in the cache must therefore be at least as large as the maximum number of devices used at the same time.  Even if a spool service is potentially responsible for a very large number of devices, a smaller cache is usually sufficient because not all of these devices are ever used at the same time.

If the cache is too small, requests of further devices cannot be queried at the host spool system as long as old devices are still fixed in the cache.  However, no errors occur; instead, these requests are automatically reported as finished within R/3 as soon as they have been successfully transferred to the host spool system, and no explicit querying takes place.  This emergency operation continues until entries in the cache can be displaced again.

The cache size is defined with the profile parameter rspo/global_shm/printer_list.  The default and minimum value is 150. The chosen value determines the maximum number of fixable cache entries. Since the device cache is also used by other work processes when output devices are accessed during the creation of spool requests, not all entries may be fixed if the spool service is to operate smoothly.  For this reason, the minimum value is added to the selected size; in order to maintain the cache functionality, these additional entries cannot be fixed.
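The sizing rule can be sketched as follows, under the stated assumptions: the configured value gives the number of fixable entries, and the minimum value is added on top as a reserve of entries that can never be fixed.

```python
# Hedged sketch of the device cache sizing derived from the description
# above; this is not the actual R/3 allocation code.

MIN_PRINTER_LIST = 150  # default and minimum of rspo/global_shm/printer_list

def device_cache_size(printer_list_param):
    """Return (total_entries, max_fixable_entries) for the device cache."""
    configured = max(printer_list_param, MIN_PRINTER_LIST)  # too-small values raised
    return configured + MIN_PRINTER_LIST, configured
```

With the default value of 150, the cache holds 300 entries, of which at most 150 can be fixed by the spool service.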

4.3   Host Spool Request List

Every spool work process can transfer requests to the host spool system and query the status of these requests.  The list of requests that are to be queried but are not yet reported as finished is kept in shared memory in order to reduce database accesses for status information. The host spool request list is used for this purpose.  Its size limits the number of requests in the host spool system that the spool service can administer.  The status of requests beyond this limit cannot be queried; such requests are reported as finished within R/3 as soon as they have been successfully transferred to the host spool system, so that no entry in the host spool request list is needed.

The size of the cache is defined with the profile parameter rspo/global_shm/hostspool_list.  By default, the size is ten times the size of the spool request queue; the minimum value is 500.  If you select too small a value, it is corrected automatically.
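The sizing and automatic correction can be sketched as follows; the function name and parameter handling are illustrative assumptions.

```python
# Hedged sketch: by default the host spool list is ten times the spool
# request queue (rspo/global_shm/job_list), never smaller than 500;
# too-small explicit values are corrected automatically.

MIN_HOSTSPOOL_LIST = 500  # minimum of rspo/global_shm/hostspool_list

def hostspool_list_size(job_list_size, hostspool_param=None):
    """Effective size of the host spool request list."""
    if hostspool_param is None:
        hostspool_param = 10 * job_list_size  # default: 10x the request queue
    return max(hostspool_param, MIN_HOSTSPOOL_LIST)
```

With the default request queue of 50 entries, the host spool list defaults to its minimum of 500.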

4.4   Server Cache

To map server names to active servers efficiently, a server cache is used. It minimizes database selects, and the relations between the individual entries and the current server configuration are stored directly in the cache.

If there are no free entries left, additional cache entries are allocated in the shared memory; the cache thus grows according to the server configuration of the R/3 System.  The maximum cache size is limited by the size of the underlying shared memory area, which is defined with the profile parameter rspo/global_shm/memory.  The default value is 100,000 bytes.  However, this area is also used by other system components (e.g. for storing the security parameters of a SAPlpd device).  The initial cache size is defined with the profile parameter rspo/global_shm/server_list.  The default value is 100.
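The on-demand growth can be sketched as follows. The per-entry byte size is an illustrative assumption; the real layout is not documented here.

```python
# Hedged sketch of the server cache: entries are allocated from the
# shared memory area as needed, limited by its total size.

SHM_BYTES = 100_000   # rspo/global_shm/memory (default), shared with other uses
ENTRY_BYTES = 64      # assumed entry size, not the real layout

class ServerCache:
    def __init__(self, initial=100):  # rspo/global_shm/server_list default
        self.capacity = initial
        self.entries = {}

    def put(self, name, mapping):
        """Store a mapping, growing the cache in shared memory if needed."""
        if name not in self.entries and len(self.entries) >= self.capacity:
            max_entries = SHM_BYTES // ENTRY_BYTES
            if self.capacity < max_entries:
                self.capacity += 1        # allocate one more entry from shm
            else:
                return False              # shared memory area exhausted
        self.entries[name] = mapping
        return True
```

The initial size only determines how many entries are pre-allocated; the hard limit is the shared memory area itself.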

Affected Releases
Software Component Release From Release To Release And subsequent

Related Notes
692486 - Print requests are not output with a high system load
430657 - Logical spool server: SPO processes completed
412065 - Incorrect output sequence of output requests
351492 - Setting up frontend printing as of Release 4.6B
65109 - Long delays when printing during overload