Monday, April 23, 2012

Medium Access Control (MAC) Protocol


Like all other sublayers of layer 2, the MAC layer provides services to the upper layer through SAPs. The layer above the MAC is the RLC layer; the lower layer is the physical layer, which provides services to the MAC. In the case of the MAC, the SAPs towards the RLC layer are logical channels. Logical channels are used by higher layers to differentiate between logical connections which may have different requirements, for example in terms of quality or delay. Furthermore, logical channels are used to distinguish control plane connections, either CCCHs or DCCHs, from user plane connections (DTCHs).
Services provided by the physical layer to the MAC layer are accessed via another type of SAP: the SAPs between the MAC and the physical layer are transport channels. Transport channels map data units to the physical channels on which the data is to be transmitted. One exception is the PCH, which is multiplexed onto the PDSCH and identified by the P-RNTI = 0xFFFE.
Multiplexing of data units from logical channels to transport channels is one of the tasks of the MAC layer. Logical channels are differentiated with LCIDs. Tables 1 and 2 show the defined LCIDs and their values for DL and UL, respectively. A CCCH always has LCID = 0; other UE-dedicated channels start with LCID = 1.

Table 1: Values of LCID for DL-SCH.

Index          LCID values
00000          CCCH
00001–01010    Identity of the logical channel
01011–11011    Reserved
11100          UE contention resolution identity
11101          Timing advance command
11110          DRX command
11111          Padding

Table 2: Values of LCID for UL-SCH.

Index          LCID values
00000          CCCH
00001–01010    Identity of the logical channel
01011–11001    Reserved
11010          Power headroom report
11011          C-RNTI
11100          Truncated BSR
11101          Short BSR
11110          Long BSR
11111          Padding
A MAC PDU consists of a MAC header part and a MAC payload part. The MAC payload conveys multiple units: MAC control elements and MAC SDUs from higher layers. The MAC header is correspondingly divided into sub-headers, with each MAC sub-header describing one unit carried in the MAC payload. Various combinations of MAC control elements, MAC SDUs, and MAC padding are possible. An example of a MAC PDU with a combination of MAC sub-headers, MAC control elements, and MAC SDUs in the payload section is depicted in Figure 1.


Figure 1: Example of MAC PDU consisting of MAC header, MAC control elements, MAC SDUs, and padding (TS36.321). Reproduced with permission from © 3GPP
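To make this structure more tangible, the following minimal Python sketch walks the MAC sub-headers of such a PDU. It implements only a simplified reading of the R/R/E/LCID(/F/L) sub-header format of TS 36.321, and the example byte string is a made-up illustration rather than captured data.

def parse_mac_subheaders(pdu: bytes):
    # Return (lcid, length) tuples; length is None for the last sub-header
    # and for fixed-size MAC control elements.
    subheaders, i = [], 0
    while i < len(pdu):
        octet = pdu[i]
        e = (octet >> 5) & 0x01        # E bit: 1 = another sub-header follows
        lcid = octet & 0x1F            # 5-bit LCID, see Tables 1 and 2
        i += 1
        length = None
        if e and lcid <= 0b01010:      # only SDU sub-headers carry an F/L field
            f = (pdu[i] >> 7) & 0x01   # F bit: 0 = 7-bit length, 1 = 15-bit length
            if f:
                length = ((pdu[i] & 0x7F) << 8) | pdu[i + 1]
                i += 2
            else:
                length = pdu[i] & 0x7F
                i += 1
        subheaders.append((lcid, length))
        if not e:                      # last sub-header: the rest of the PDU is payload
            break
    return subheaders

# Hypothetical header: a 20-byte SDU on LCID 3 followed by padding (LCID 31).
print(parse_mac_subheaders(bytes([0x23, 0x14, 0x1F])))   # -> [(3, 20), (31, None)]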
Logical channels multiplexed onto transport channels are prioritized by the scheduling algorithm, which decides what to schedule on which physical resources, as described in detail for DL scheduling. There is only one MAC entity per UE: in the UL the UE runs a single MAC entity, while in the DL the eNB executes multiple MAC entities in parallel when it has to handle multiple UEs.
The MAC layer implements a soft combining, N-process, stop-and-wait retransmission mechanism known as HARQ (Hybrid Automatic Repeat Request). Transport blocks are protected with a FEC algorithm known as turbo codes. Soft combining means that a block which could not be decoded correctly is negatively acknowledged to trigger a retransmission, while the received but undecodable block is kept in a soft buffer and recombined with the new retransmission. Combining two or more receptions in this way increases the chance that the transport block can finally be decoded error-free.
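The gain of soft combining can be illustrated with a small toy simulation. This is only a sketch of the principle, working on simulated soft values (LLRs) and plain hard decisions instead of a real turbo decoder and CRC check; all numbers are made up.

import random

def receive_llrs(tx_bits, snr):
    # Simulated noisy soft values: a positive LLR leans towards bit 0.
    return [(1 - 2 * b) * snr + random.gauss(0, 1) for b in tx_bits]

def hard_decision(llrs):
    return [0 if llr >= 0 else 1 for llr in llrs]

tx_bits = [random.randint(0, 1) for _ in range(1000)]
soft_buffer = [0.0] * len(tx_bits)     # soft buffer of one HARQ process

for attempt in range(1, 5):            # stop-and-wait: retransmit until decodable
    rx = receive_llrs(tx_bits, snr=1.0)
    soft_buffer = [a + b for a, b in zip(soft_buffer, rx)]
    errors_alone = sum(d != b for d, b in zip(hard_decision(rx), tx_bits))
    errors_combined = sum(d != b for d, b in zip(hard_decision(soft_buffer), tx_bits))
    print(f"attempt {attempt}: bit errors alone={errors_alone}, after combining={errors_combined}")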

Wednesday, April 18, 2012

Stream Control Transmission Protocol (SCTP)



Originally, SCTP was defined as a transport protocol for SS7 messages to be transmitted over IP networks. Like TCP and UDP, it is regarded as a layer 4 transport protocol in the ISO OSI model.
The SCTP data units are called chunks. All chunks belong to an association (the SCTP connection), and in-order delivery is guaranteed within a stream of that association. However, within the same SCTP packet there may be chunks belonging to different streams transmitted simultaneously. In addition, it is also possible to send urgent packets "out of order" with a higher priority.
SCTP also supports multihoming scenarios where one host owns multiple valid IP addresses.
Besides the data streams, SCTP frequently sends heartbeat messages to test the state of the connection.
How SCTP works will be demonstrated by means of an example. Figure 1 shows the message flow required to transport the NAS signaling message Attach Request from the eNB to MME across the S1 interface.

 
Figure 1: SCTP example
After setting up an RRC connection on the Uu interface between the UE and the eNB, the UE sends the attach request message. When the appropriate RRC transport container is received by the eNB, the establishment of a dedicated SCTP stream on the S1 interface as shown in Figure 1 is triggered.
The establishment of the SCTP stream starts with an SCTP initiation message. It will always be sent by the eNB in the case of the attach procedure, because the RRC connection is established earlier and the request to transport the NAS message triggers the request to have an S1 connection. The SCTP initiation message contains the IP addresses of both the eNB and the MME. The individual subscriber for which this connection is established is represented by a unique pair of SCTP source port and destination port numbers.
The SCTP initiation needs to be acknowledged by the peer SCTP entity in the MME. In the next step an SCTP cookie echo message is sent and acknowledged by a cookie ACK, which completes the establishment of the association. The periodic check of the availability and function of the active connection is then performed by the heartbeat procedure. Similar functions with other message names are found, for example, in the SS7 SCCP Inactivity Test or the GTP Echo Request/Response.
On top of SCTP, higher layer messages are transported using SCTP datagram (SCTP DTGR) packets. Each SCTP DTGR contains a Transmission Sequence Number (TSN) in addition to source and destination address information. This TSN will later be used by the peer entity to acknowledge the successful reception of the DTGR by sending an SCTP selective ACK message on S1 that confirms error-free reception of the SCTP DTGR that carried the attach request message. Further S1AP and NAS messages of this connection will be transported in the same way, and heartbeat messages will be exchanged periodically as long as the connection remains active.
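As an aside, an SCTP association can be opened from application code with the standard socket API, which makes it easy to observe the message flow described above with a protocol analyzer. The sketch below assumes a Linux host with kernel SCTP support; the MME address is a placeholder, and real S1AP signaling additionally uses SCTP streams and a payload protocol identifier, which this plain one-to-one socket example does not show.

import socket

MME_ADDR = ("192.0.2.10", 36412)   # placeholder IP; 36412 is the SCTP port registered for S1AP

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
sock.connect(MME_ADDR)             # triggers INIT / INIT ACK / COOKIE ECHO / COOKIE ACK
sock.send(b"...encoded S1AP message...")   # carried in a DATA chunk with its own TSN
sock.close()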
If the S1 signaling transport layer SCTP has problems in offering proper functionality and those problems are located in the eNB SCTP entity, there will be no signaling transport on S1 at all. If the MME suffers from congestion or protocol errors on the SCTP level as shown in Figure 2, the expected selective ACK messages will be missing (maybe not sent at all, maybe sent with a TSN out of the expected range). This malfunction may be detected by a negatively acknowledged cookie echo and, as a result, the connection will be terminated; or the attach accept message expected by the UE will never arrive. The missing attach accept message will be recognized by the UE, where a timer guards the NAS procedures. After the guard timer expires on the UE side, the attach request message will be repeated up to n times (the counter value n is configurable and typically signaled in the broadcast SIBs; the default value recommended by 3GPP is n = 5). If neither an attach accept message nor an attach reject message is received by the UE, the handset will go back to IDLE when the maximum number of attach request repetitions has been sent.

 
Figure 2: Failure in SCTP signaling transport

Sunday, April 15, 2012

Internet Protocol (IPv4/IPv6)



The IP frame is called a datagram, and there are two main versions of the IP: IPv4 and IPv6.

IPv4

The IP header has a minimum size of 20 bytes (if no options are used) and a maximum size of 60 bytes (including options and padding bits). Due to the set of different options that can be appended to the IPv4 header, these headers can become quite large.
The included information elements shown in Figure 1 are:
  • Version: IP protocol version, here IPv4.
  • Internet Header Length (IHL): The length of the header, counted in 32-bit words.
  • Type of Service: The QoS parameters for IP.
  • Total Length: The length of the IP frame including header and payload field.
  • Identification, Fragment Offset: Both used in case of fragmentation/reassembly.
  • Time to Live: A hop counter to prevent circular routing.
  • Protocol: Indicates the higher layer protocol that uses IP as the transport layer; typical examples are ICMP, TCP, UDP.
  • Source Address: IP address of the sender of the datagram.
  • Destination Address: IP address of the receiver of the datagram.
  • Options: For example, the timestamp of each router that the IP packet passed.
  • Padding: Fill bits to align the header to a multiple of 32 bits.
 
Figure 1: IP datagram structure
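A minimal Python sketch of reading these fields from the fixed 20-byte part of an IPv4 header is shown below. IP options, if present, would follow the fixed part and are not parsed here, and the example byte string is hand-made rather than captured traffic.

import struct

def parse_ipv4_header(raw: bytes):
    ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, checksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl_bytes": (ver_ihl & 0x0F) * 4,      # IHL is counted in 32-bit words
        "type_of_service": tos,
        "total_length": total_len,
        "identification": ident,
        "flags": flags_frag >> 13,
        "fragment_offset": (flags_frag & 0x1FFF) * 8,   # offset in bytes
        "time_to_live": ttl,
        "protocol": proto,                      # 1 = ICMP, 6 = TCP, 17 = UDP
        "source": ".".join(str(b) for b in src),
        "destination": ".".join(str(b) for b in dst),
    }

# Hand-made example (checksum left as zero): ICMP packet from 195.24.1.2 to 195.24.1.3.
example = bytes.fromhex("450000549abc400040010000c3180102c3180103")
print(parse_ipv4_header(example))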
Since the maximum packet size of an IP datagram can vary from one local network to the next, the IP is equipped with fragmentation/reassembly functionality that allows the transmission of larger frames as a series of smaller portions. Figure 2 shows an example where a frame with 1600 bytes of data is fragmented into two smaller frames with 1480 and 120 bytes of data, respectively. All fragments carry the same frame ID (in the example: 1234). As long as more fragments follow, the More Fragments (MF) flag is set to "1." The last frame in a series of fragments has MF = "0," but carries a fragment offset that is required for proper reassembly on the receiver side.

 
Figure 2: IP fragmentation
IP fragmentation (Figure 2) may be found in the user plane data streams, but should be avoided on interfaces that carry 3GPP signaling.
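The arithmetic behind Figure 2 can be reproduced with a few lines of Python. This is only a sketch of the principle: the offsets are shown in bytes here, whereas the fragment offset field on the wire counts 8-byte units, and the function name is made up.

def fragment(payload_len, max_fragment_data, frame_id=1234):
    fragments, offset = [], 0
    while offset < payload_len:
        data = min(max_fragment_data, payload_len - offset)
        more_fragments = 1 if offset + data < payload_len else 0
        fragments.append({"id": frame_id, "MF": more_fragments,
                          "offset_bytes": offset, "data_bytes": data})
        offset += data
    return fragments

for frag in fragment(1600, 1480):
    print(frag)
# -> {'id': 1234, 'MF': 1, 'offset_bytes': 0,    'data_bytes': 1480}
#    {'id': 1234, 'MF': 0, 'offset_bytes': 1480, 'data_bytes': 120}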
IPv4 addresses are typically written in the so-called dotted decimal notation, for example, 195.24.1.2. There are 32 bits (= 4 bytes) reserved for the address fields in the IP datagram. Each number in the dotted decimal format represents the decimal value of a single byte. The dot "." is used as the separator between the different bytes of the IP address. Figure 3 shows a sample address in binary, hexadecimal, and decimal dotted notation format.

 
Figure 3: Example of IPv4 address format
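The notation conversion of Figure 3 is easy to reproduce, for example with a few lines of Python (no assumptions beyond the sample address from the text):

octets = [int(o) for o in "195.24.1.2".split(".")]
print(" ".join(f"{o:08b}" for o in octets))        # binary: 11000011 00011000 00000001 00000010
print("0x" + "".join(f"{o:02X}" for o in octets))  # hexadecimal: 0xC3180102
print(".".join(str(o) for o in octets))            # dotted decimal: 195.24.1.2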

 

IPv6

The most important improvements that come with IPv6 are:
  • A larger number of possible address values becomes available. In IPv4 the address field is limited to 32 bits, which means in turn that 2^32 (4.3 billion = 4.3 × 10^9) possible values can be addressed. IPv6 provides space for 2^128 (= 3.4 × 10^38) possible address values. This is an improvement by a factor of 2^96 and is reached by a restructuring of the IP header. In the IPv6 header shown in Figure 4, 128 bits (16 bytes) are reserved for each of the source and destination addresses. The larger address ranges available with IPv6 will also allow more direct end-to-end packet routing; hence, less address translation in network nodes is required and the packet routing in the overall network is expected to be faster and more efficient.

     
    Figure 4: IPv6 header format
  • The automatic configuration of dynamically assigned IP addresses is improved and in turn legacy procedures like DHCP (Dynamic Host Configuration Protocol) become unnecessary.
  • IPv6 supports Mobile IP, simplifies renumbering (change of dynamically assigned IP addresses), and allows multihoming of subscribers. The purpose of multihoming is to increase the reliability of Internet connections by using two different Internet service providers simultaneously. If the access to one of the providers is interrupted a redirection of packets via the second connection is possible. Mobile IP means that the subscriber always gets the same IP address assigned, no matter if working at home or traveling around.
  • IPsec is integrated into IPv6 to achieve higher security of IP data transmission, whereas in IPv4 security functions were not part of the original design and IPsec is only an optional add-on.
  • All in all, the basic header of IPv6 has a simpler structure compared to the header of IPv4. Although the overall header size is larger than in IPv4 (40 bytes, most of them occupied by the longer IP addresses), there are fewer basic header fields.
  • For the version, the decimal number 6 is encoded as binary bit sequence "0110."
  • The IPv6 traffic class indicates the packet priority and should not be mistaken for the traffic class QoS element introduced in 3GPP standards that classifies the throughput sensitivity and delay sensitivity of application services. IPv6 traffic class priority values subdivide into two ranges: traffic, where the source provides congestion control, and non-congestion control traffic.
  • The flow label is used for QoS management and encoded in 20 bits. Packets having the same flow label value will be treated with the same priority and reliability. This is important for the routing of packets that contain real-time service data.
  • The payload length indicates the size of the payload in octets and is encoded in 16 bits. When this field is set to zero, a "Jumbo Payload" hop-by-hop option carries the actual length instead. The size of the basic header is not counted by the payload length, but the optional header extensions are included. So payload length + 40 bytes (of basic header) = total length of the IPv6 packet.
  • The next header information element specifies the next upper layer protocol of the transported payload such as UDP and TCP. The values are compatible with those specified for the IPv4 protocol field (8 bits). The next header information can also point to optional extension headers. In this case the upper layer payload protocol is not indicated by this field.
  • The hop limit field (8 bits) indicates the maximum number of routers that are allowed to be involved in routing an IPv6 packet. It replaces the time to live field of IPv4. If the hop limit reaches the value "zero" the packet will be discarded by the router.
  • Source and destination addresses, 128 bits each, represent the sender and receiver of the IPv6 datagram.
IPv6 addresses are normally written as eight groups of 16 bits, where each group is separated by a colon (:). For example, 2001:0db8:85a3:0000:0000:8a2e:0370:7334 is a valid IPv6 address.
To shorten the writing and presentation of addresses, several simplifications to the notation are permitted. Any leading zeros in a group may be omitted; thus, the given example becomes: 2001:db8:85a3:0:0:8a2e:370:7334.
Also, a single run of one or more consecutive all-zero groups may be replaced with a double colon (::), at most once per address: 2001:db8:85a3::8a2e:370:7334.
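These shortening rules are exactly what Python's ipaddress module applies, which offers a convenient way to check an address by hand:

import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")
print(addr.compressed)   # 2001:db8:85a3::8a2e:370:7334
print(addr.exploded)     # 2001:0db8:85a3:0000:0000:8a2e:0370:7334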
It is possible to use IPv6 addresses in the URL notation format. In this case the IPv6 address information is enclosed in square brackets, for example: http://[2001:db8:85a3::8a2e:370:7334]/
The brackets prevent part of the IPv6 address being misinterpreted as port number information. A URL including an IPv6 address and a port number looks like this: http://[2001:db8:85a3::8a2e:370:7334]:8080/
Wednesday, April 11, 2012

Ethernet | Protocol Functions, Encoding, Basic Messages, and Information Elements



Ethernet is the typical layer 2 transport protocol in IP networks. It is designed to transmit packets from a sender to a receiver, both identified by an address information element.
In line with this limited functionality, Ethernet has a very small header. The header (Figure 1) contains only the following information elements:
  • Destination address.
  • Source address.
  • Ethernet type.
 
Figure 1: Ethernet header example
The Ethernet type field is similar to a SAPI (Service Access Point Identifier): it indicates which higher layer protocol is transported in the Ethernet frames.
The Ethernet addresses, often called MAC addresses (but with nothing in common with RLC/MAC!), consist of 6 bytes. These MAC addresses are fixed hardware addresses and, due to a defined numbering scheme, each address is unique worldwide.
If IP data is to be transmitted using Ethernet the hardware MAC address of the receiver of IP packets is unknown when the connection starts. Only the target IP address is known. However, since Ethernet is the lowest layer of the connection there must be a source and a destination MAC address included in each header. In other words, for each sender IP address there is an appropriate sender hardware address, and for each destination IP address there must be an appropriate target hardware address.
The target hardware address that is related to the target IP address is requested by the Address Resolution Protocol (ARP). Its sister protocol, the Reverse Address Resolution Protocol (RARP), can be used to find the target IP address (or, in terms of ARP/RARP, the target protocol address) to a known MAC address.
The address resolution procedure consists of two steps:
  1. An ARP request (req) message with Target Hardware Address = "0" is sent to all(!) IP clients in the network.
  2. The client that has the target protocol address as set in the ARP req message sends an ARP Reply (rpl). The sender hardware address in the ARP rpl is the Ethernet MAC address related to the destination IP address that the sender of the ARP req is looking for. An example of the Ethernet address resolution procedure is shown in Figure 2.

 
Figure 2: Ethernet address resolution
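For illustration, the ARP request of step 1 can be assembled with a few lines of Python. This is only a sketch of the frame layout (Ethernet header plus the 28-byte ARP body); the MAC and IP addresses are made-up example values, and actually sending the frame would require a raw socket.

import struct

def build_arp_request(src_mac, src_ip, target_ip):
    broadcast = b"\xff" * 6
    eth_header = broadcast + src_mac + struct.pack("!H", 0x0806)   # EtherType 0x0806 = ARP
    arp_body = struct.pack("!HHBBH6s4s6s4s",
                           1,            # hardware type: Ethernet
                           0x0800,       # protocol type: IPv4
                           6, 4,         # hardware / protocol address lengths
                           1,            # operation: 1 = request, 2 = reply
                           src_mac, src_ip,
                           b"\x00" * 6,  # target hardware address = still unknown
                           target_ip)
    return eth_header + arp_body

frame = build_arp_request(bytes.fromhex("02004c4f4f50"),
                          bytes([192, 0, 2, 1]), bytes([192, 0, 2, 99]))
print(frame.hex())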

Saturday, April 7, 2012

S1 – Control/User Plane



On the S1 reference point the physical layer L1 will in most cases be realized by Gigabit Ethernet cables. L2 in this case will be Ethernet. On top of Ethernet we find IP, but used as a transport protocol between network nodes: the eNB and the MME on the control plane and the eNB and the S-GW on the user plane. This lower layer IP does not represent the user plane frames.
Instead, the user plane IP frames (higher layer IP) are carried in the GTP Tunneling Packet Data Unit (T-PDU). GTP is responsible for the transport of payload frames through the IP tunnels on S1-U. The transport layer for GTP-U is the User Datagram Protocol (UDP). Like IP, this protocol may be found twice in the user plane stack: the lower UDP carries GTP-U between the eNB and the S-GW, while a higher UDP (not shown in Figure 1) is transparently routed through the mobile network as the transport protocol for real-time application data. The higher layer IP on top of GTP-U, as well as all application data on top of this higher layer IP, is identical with the user plane information.

 
Figure 1: Protocol stack S1 control/user plane
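The user plane nesting described above can be sketched in a few lines of Python: the end-to-end IP packet becomes the payload of a GTP-U T-PDU, which in turn becomes the payload of the lower layer UDP/IP transport. The TEID and the payload are made-up values, and a real stack would of course add the outer UDP, IP, and Ethernet headers as well as optional GTP-U header extensions.

import struct

def gtpu_encapsulate(inner_ip_packet: bytes, teid: int) -> bytes:
    flags = 0x30          # version 1, protocol type GTP, no optional fields
    msg_type = 0xFF       # T-PDU (G-PDU)
    header = struct.pack("!BBHI", flags, msg_type, len(inner_ip_packet), teid)
    return header + inner_ip_packet   # this byte string becomes the payload of the lower UDP

tunnel_pdu = gtpu_encapsulate(b"...end-to-end IPv4/IPv6 packet...", teid=0x1A2B3C4D)
print(len(tunnel_pdu), "bytes of UDP payload towards GTP-U port 2152")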
On the control plane side, the Stream Control Transmission Protocol (SCTP) provides reliable transport functionality for the very important signaling messages. S1AP is the communication expression between the eNB and the MME, while NAS is the expression for the communication between the UE and the MME; the NAS messages are transported transparently inside S1AP messages.

Monday, February 20, 2012

LTE Network Protocol Architecture


Uu – Control/User Plane

The protocol stack used on the radio interface Uu is shown in Figure 1. The physical layer in this stack is represented by OFDM in the DL and SC-FDMA in the UL. Above it we see the MAC protocol, which is responsible for mapping the logical channels onto the transport channels, but also for such important tasks as packet scheduling and timing advance control. RLC provides reliable transport services and can be used to segment/reassemble large frames. The main purpose of PDCP is the compression of IP headers as well as ciphering of user plane and control plane data and integrity protection of control plane data.

 
Figure 1: Protocol stack LTE Uu interface
On top of PDCP the stack is split into user plane and control plane parts. On the control plane side we see the RRC protocol, which is the expression for the communication between the UE and the eNB. RRC provides all the necessary functions to set up, maintain, and release a radio connection for a particular subscriber.
RRC also serves as a transport protocol for NAS signaling messages. NAS is the expression for the communication between the UE and MME in which MME represents the core network.
On the user plane side we see IP as the transport layer for end-to-end applications. On the Uu stack the IP is always end-to-end IP, which means that all these IP packets are transparently routed, often tunneled through the mobile network. The user plane IP frames we see on Uu are the same IP frames that can be monitored at SGi reference points before or behind the PDN-GW.
The IP version can be Internet Protocol version 4 (IPv4) or Internet Protocol version 6 (IPv6). In the case of VPN (Virtual Private Network) traffic, IPsec will be used.
The applications on top of IP in the user plane stack are all protocols of the TCP/IP suite, such as the File Transfer Protocol (FTP), HTTP (web-browsing), and POP3/SMTP (for e-mail), but also Real-Time Transport Protocol (RTP) and SIP for real-time services like VoIP.

Thursday, February 16, 2012

Initial UE Radio Access



Cell search is a procedure for synchronizing time and frequency to a base station sector. Additionally, cell search and synchronization include deriving basic information of the target cell.
LTE defines a hierarchical cell search similar to the one deployed in WCDMA UMTS. The PSS and SSS provide radio frame and slot synchronization, as well as information such as the duplex mode (TDD or FDD) and the physical layer c-ID group and c-ID.
A UE synchronizing to a new LTE cell starts by searching for the PSS, which is a Zadoff–Chu sequence. Three PSS sequences are defined, indicating the physical layer identity within a c-ID group; 168 physical layer c-ID groups with three identities each are defined. After successfully detecting the PSS, and with it the physical layer identity and the slot timing, the SSS is decoded, which is broadcast one OFDM symbol prior to the PSS and carries the c-ID group. The UE has now obtained DL slot and radio frame timing as well as frequency synchronization. By successfully decoding PSS and SSS it has also obtained the complete 9-bit physical layer c-ID, together with the radio frame type (type 1 for FDD or type 2 for TDD) and the CP length.
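The relation between the two synchronization signals and the physical cell identity boils down to one line of arithmetic, sketched here (the function and parameter names are made up):

def physical_cell_id(group_from_sss: int, identity_from_pss: int) -> int:
    # 168 groups (0-167) from the SSS, 3 identities (0-2) from the PSS:
    # 3 * 168 = 504 cell identities, which fit into 9 bits.
    assert 0 <= group_from_sss <= 167 and 0 <= identity_from_pss <= 2
    return 3 * group_from_sss + identity_from_pss

print(physical_cell_id(group_from_sss=101, identity_from_pss=2))   # -> 305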
Figure 1 illustrates the initial steps of cell synchronization and access. After synchronization, the UE is ready to detect and decode the PBCH in order to derive the system bandwidth, the PHICH configuration, and the current System Frame Number (SFN). Other common system information now needs to be retrieved from the DL-SCH. SIBs are scheduled on regular shared channel resources by using the reserved SI-RNTI = 0xFFFF. SIBs provide general system configuration information such as the UL configuration and the random access configuration.

 
Figure 1: Initial cell access with level of retrieved information

Sunday, February 12, 2012

Channel Mapping and Multiplexing



Besides physical and transport channels, LTE also defines logical channels. Logical channels are multiplexed onto transport channels within the MAC layer. Logical channels map connections with different content onto transport channels: for example, CCCHs addressing multiple UEs, DCCHs addressing a specific UE, or dedicated traffic channels (DTCHs) carrying higher layer application data.
Logical channels are addressed with a logical channel ID (LCID), a field within the MAC PDU header. Using these LCIDs, logical channels are multiplexed onto transport channels, which specify where the information is to be transmitted. Finally, transport channels are carried on physical channels as a service provided by the physical layer.
Figures 1 and 2 show the above-described channel architecture from the basic physical channels via transport channels to logical channels bearing higher layer messages for DL and UL respectively.

 
Figure 1: Downlink channel mapping and multiplexing from logical channels via transport channels to physical channels
 
Figure 2: Uplink channel mapping and multiplexing from logical channels via transport channels to physical channels
Two basic sets of logical channels are defined:
  • Control channels: CCCH and DCCHs.
  • Traffic channels: DTCHs.
The nature of common channels is such that no specific UE is addressed; the information is either of general interest for all subscribers in the cell or a message from a UE which has not yet established a dedicated control/traffic channel. A typical example of a common channel is the BCCH, which carries the broadcast of SIBs.
Traffic channels carry user plane protocols like the Packet Data Convergence Protocol (PDCP) and application IP packets, while control channels carry control plane protocols as RRC and NAS.
  • DL logical channels:
    • – Broadcast Control Channel (BCCH).
    • – Paging Control Channel (PCCH).
    • – Common Control Channel (CCCH).
    • – Dedicated Control Channel (DCCH).
    • – Dedicated Traffic Channel (DTCH).
  • UL logical channels:
    • – Common Control Channel (CCCH).
    • – Dedicated Control Channel (DCCH).
    • – Dedicated Traffic Channel (DTCH).
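As a compact summary of Figures 1 and 2, the mapping can be written down as a simple lookup, sketched here in Python (simplified: MBMS channels and conditional mappings are left out):

DL_MAPPING = {
    "BCCH": ["BCH", "DL-SCH"],   # MIB on the BCH, SIBs on the DL-SCH
    "PCCH": ["PCH"],
    "CCCH": ["DL-SCH"],
    "DCCH": ["DL-SCH"],
    "DTCH": ["DL-SCH"],
}
UL_MAPPING = {
    "CCCH": ["UL-SCH"],
    "DCCH": ["UL-SCH"],
    "DTCH": ["UL-SCH"],
}
print(DL_MAPPING["BCCH"], UL_MAPPING["DTCH"])   # -> ['BCH', 'DL-SCH'] ['UL-SCH']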

Thursday, February 9, 2012

Transport Channels in LTE



The physical layer provides a transport service for MAC PDUs. This service is accessed by transport channels. Most transport channels are directly mapped to physical channels. In other words, transport channels are the gateway to the physical channels and determine for MAC PDUs where they are to be transmitted.
The following DL transport channel types are defined:
  • Broadcast Channel (BCH):
    • – Uses a static transport format and has the requirement that all UEs within the cell have to receive its information error-free. The reception of the BCH is mandatory for accessing any service of a cell.
  • DL-SCH:
    • – Carries all semi-static broadcast information (SIB) and all UE-specific traffic channels.
    • – DL-SCH is secured with HARQ algorithms.
    • – Efficiency is realized with AMC link adaptation.
    • – Various TMs (transmission modes) are defined for different environment scenarios to increase efficiency with respect to the current conditions.
    • – DRX is available in order to increase handset operating time.
    • – Makes use of spatial algorithms like beamforming or MIMO.
  • Paging Channel (PCH):
    • – Needs to be received in complete cell coverage area.
    • – Supports DRX in order to increase battery operating cycle.
    • – Dynamically allocated via own physical identifier (P-RNTI).
  • Multicast Channel (MCH):
    • – Broadcast to entire cell coverage area.
    • – MBMS transmission with use of multiple cells.
A dedicated DL control transport channel is not defined, as the PDCCH is used for physical layer control only. All higher layer control plane data is transmitted via the DL-SCH.
Defined UL transport channel types are shown in the following items:
  • UL-SCH:
    • – UL-SCH is secured with HARQ algorithms.
    • – Fully dynamic and semi-static resource allocation schemes.
    • – Can make use of multi-user MIMO (UL "virtual" MIMO).
    • – Uses dynamic link adaptation like AMC.
  • RACH:
    • – Accessible without UL synchronization.
    • – Collision-based and collision-free operating modes.
    • – Various modes depending on cell size and interference.
As in the DL direction, no UL control transport channel is defined as all the higher layer control plane is transmitted on the UL-SCH. The PUCCH is a control channel used by the physical layer only.

Monday, February 6, 2012

Link Adaptation in LTE



Mobile wireless reception conditions vary greatly over frequency and time as described in Section 1.8. In order to cope with these circumstances and guarantee the best possible QoS, a procedure known as Adaptive Modulation and Coding (AMC) is implemented. AMC controls and changes transmission parameters to achieve a defined Transport Block Error Rate (BLER) of below 10%, in order to keep retransmissions in a suitable range. This is done by adapting the modulation scheme (on the LTE shared channels between QPSK and 64QAM) and the Forward Error Correction (FEC) coding rate.
Different modulation schemes make bit detection more or less robust against noise and other distortions caused by the wireless channel. Figure 1 shows the modulation schemes applied on the LTE shared channels. The most robust transmission is achieved by mapping just 2 bits to each modulation symbol, as with QPSK, resulting in four constellation points. The large distance between the constellation points of QPSK gives a higher probability of a correct decision at the receiver, even under noisy reception conditions. 16QAM and 64QAM map 4 and 6 bits, respectively, to one modulation symbol and are used under better wireless channel conditions to achieve a higher data throughput. The task is to find the best compromise between modulation scheme and code rate for a given channel quality; LTE defines a list of MCS combinations and signals just an MCS index.

 
Figure 1: Different QAM schemes used with LTE and the number of bits mapped to each scheme
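A rough back-of-the-envelope sketch shows how the raw bit count scales with the modulation scheme for the same number of resource elements; the resource figures are toy values and reference/control symbols are ignored.

BITS_PER_SYMBOL = {"QPSK": 2, "16QAM": 4, "64QAM": 6}

resource_elements = 12 * 14 * 50     # toy example: 50 RBs over one 1 ms subframe
for scheme, bits in BITS_PER_SYMBOL.items():
    print(f"{scheme}: {resource_elements * bits / 1e3:.1f} kbit raw per subframe")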
The data modulated onto the subcarriers with the different modulation schemes needs to be protected against transmission errors. LTE defines a turbo de-/encoder with trellis termination and a native code rate of one-third. The turbo coder adds redundancy bits to the data, which makes it possible to correct some bit errors. The code rate is the ratio of the source data rate to the resulting protected data rate; thus, a code rate of one-third encodes 1 bit into 3 bits. Other code rates are needed in order to optimize the trade-off between protection and efficiency. This is done by puncturing the native coded bit stream to a higher code rate (less protection) by deterministically leaving out coded bits, or by deterministically repeating coded bits if a smaller code rate (more protection) is desired.
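The principle of deriving other code rates from the rate-1/3 mother code can be sketched as follows. This is only an illustration: the real LTE rate matching of TS 36.212 uses sub-block interleaving and a circular buffer, while the toy function below simply keeps an evenly spread subset of coded bits (puncturing) or cycles through them (repetition).

import math

def rate_match(mother_codeword, info_bits, target_rate):
    needed = math.ceil(info_bits / target_rate)
    if needed <= len(mother_codeword):
        # puncturing: keep an evenly spread subset of the coded bits
        step = len(mother_codeword) / needed
        return [mother_codeword[int(i * step)] for i in range(needed)]
    # repetition: cycle through the coded bits until enough have been collected
    return [mother_codeword[i % len(mother_codeword)] for i in range(needed)]

info_bits = 100
mother = list(range(3 * info_bits))   # stands in for the rate-1/3 turbo coder output
print(len(rate_match(mother, info_bits, target_rate=3 / 4)))   # 134 bits kept (puncturing)
print(len(rate_match(mother, info_bits, target_rate=1 / 5)))   # 500 bits sent (repetition)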
Additionally, one parameter being controlled is the UL transmit power. UL power control is implemented to deal with the near–far effect. Figure 2 shows an UL scenario with a near–far effect, compared to a DL scenario without power differences between the user signals: in the DL the signal mix is transmitted from a single position (the eNB), so all components are equally attenuated. The near–far effect occurs when one user is close to the base station (near) while another user is far away, so that the higher path loss leads to a lower receive power for the signal of the cell edge user. All UL receive signals should arrive with roughly equal power so that each signal drives the analog-to-digital converter equally; this reduces the quantization noise for users with low received signal levels and limits inter-subcarrier interference between the users, which arises with imperfect UL synchronization in real-life scenarios.

 
Figure 2: Near–far effect occurring in uplink direction, compared to equal signal strength reception in downlink
UL TPC commands are sent via the designated DCI formats 3 and 3A. DCI formats 3 and 3A carry differential power control commands for PUCCH and PUSCH transmission in steps of decibels. DCI format 3 uses 2-bit commands, whereas DCI format 3A uses single-bit commands. These dedicated DCIs with TPC commands are only used when there is no data to be transmitted to the UE; otherwise, the TPC command is transmitted embedded in other control information on the PDCCH for this UE. An initial 3-bit TPC command is embedded in the RAR message. The different TPC commands are listed in Tables 1 and 2.
Table 1: Mapping of TPC command field in DCI format 1A/1B/1D/1/2A/2/3 to δPUCCH values. Reproduced with permission from © 3GPP

TPC command field    δPUCCH (dB)
0                    -1
1                     0
2                     1
3                     3
Table 2: Mapping of TPC command field in DCI format 3A to δPUCCH values. Reproduced with permission from © 3GPP

TPC command field    δPUCCH (dB)
0                    -1
1                     1
UEs report their received channel quality to the eNB by transmitting a CQI value. The CQI value represents either a wideband receive quality as a scalar or a more detailed report about frequency sections (sub-bands) as a vector. CQI reports are transmitted periodically or aperiodically, as configured by higher layers. Sub-band CQI reports indicate the receive quality of each sub-band relative to the wideband average in four steps: worse, equal, better, and much better. With the best M method, the UE reports the M best sub-bands compared to the average channel quality, as depicted in Figure 3.

 
Figure 3: CQI illustration with sub-bands and best M reporting. Reproduced with permission from Nomor
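The best M selection itself is a simple operation, sketched here with made-up per-sub-band quality values:

def best_m_report(subband_quality, m):
    wideband = sum(subband_quality) / len(subband_quality)
    best = sorted(range(len(subband_quality)),
                  key=lambda i: subband_quality[i], reverse=True)[:m]
    return wideband, sorted(best)

quality = [7.1, 9.4, 8.8, 5.0, 6.2, 9.9, 4.3, 7.7]   # per-sub-band quality metric
print(best_m_report(quality, m=3))                    # -> (approximately 7.3, [1, 2, 5])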
From this measurement information the UE derives a CQI index, one out of 16 values, corresponding to the highest MCS it expects to receive with a BLER below 10%. This enables differentiation between high- and low-cost handsets, which use more or less expensive RF hardware and/or a more sophisticated IQ signal processing engine.
To recapitulate, LTE link adaptation stacks several techniques that secure the transmission or make it more efficient, each operating with a different control loop delay according to the dimension of the process it controls. The following items summarize the LTE link adaptation functions with their responsiveness:

  • Adaptive frequency-selective scheduling: Assigns frequency resources to UEs on a 1 ms basis, providing each UE with its individually best reception quality.
  • AMC: Obtains the most efficient modulation and FEC code rate in order to balance retransmissions against maximization of throughput.
  • HARQ: A multiple retransmission process that uses prior transmissions to increase the probability of correct decoding.
  • TPC: Provides UL power control in order to minimize multi-user interference.
