Friday, August 21, 2009

Code Allocation in UMTS

Code Allocation:

Channelization codes, also called spreading codes, serve two purposes: first, to spread the signal energy evenly over the bandwidth, below the noise level; and second, to separate the transmissions from a single source. They are based on the Orthogonal Variable Spreading Factor (OVSF) technique. Orthogonal codes have the property that when any two distinct codes in the family - except a code and its ancestors or descendants on the same branch of the code tree - are multiplied together bitwise and the products are summed, the result is zero. In mathematical terms, codes in the family correlate completely with themselves and have zero cross-correlation with any of the other codes. However, the zero cross-correlation property of orthogonal codes holds only if the codes are time aligned.

In Uplink, channelization codes are used to distinguish data and control channels from the same UE.
In Downlink, channelization codes are used to distinguish signals for different channels and users within a cell.
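To make the orthogonality property concrete, here is a minimal Python sketch that builds the OVSF code family and checks the cross-correlation of two codes. The code-tree indexing is simplified for illustration and does not follow the exact numbering in TS 25.213.

```python
# Illustrative sketch: generate OVSF codes and check their orthogonality.
# Note: the ordering of codes here is a simplification, not the exact
# 3GPP TS 25.213 tree numbering.

def ovsf_codes(sf):
    """Generate all OVSF codes of spreading factor sf (a power of 2)."""
    codes = [[1]]
    while len(codes[0]) < sf:
        codes = [c + c for c in codes] + [c + [-x for x in c] for c in codes]
    return codes

def cross_correlation(a, b):
    """Multiply two codes bitwise and sum the products."""
    return sum(x * y for x, y in zip(a, b))

codes = ovsf_codes(8)
# Any two distinct codes of the same spreading factor are orthogonal:
print(cross_correlation(codes[1], codes[2]))   # 0
# A code correlates fully with itself:
print(cross_correlation(codes[1], codes[1]))   # 8 (= spreading factor)
```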

Scrambling codes, also called PN scrambling codes, are used to separate users and different base stations. They are applied on top of spreading and do not change the signal bandwidth. Scrambling codes also have better cross-correlation properties than channelization codes, even when the codes are not time aligned. A PN (pseudo-random number) sequence is an algorithmically generated sequence of numbers that are distributed evenly throughout the number space but with no discernible pattern to their distribution.

In Uplink, scrambling codes are used to distinguish UE terminals.
In Downlink, scrambling codes are used to distinguish different cells.
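As an illustration of the pseudo-random sequences mentioned above, the sketch below produces a maximal-length sequence from a simple linear feedback shift register (LFSR). The register length and taps are arbitrary examples; the actual UMTS scrambling codes are Gold codes built from the specific LFSRs defined in TS 25.213.

```python
# Minimal sketch of a pseudo-random (PN) chip sequence via an LFSR.
# This is a generic m-sequence for illustration only; the real UMTS
# scrambling codes are Gold codes specified in 3GPP TS 25.213.

def lfsr_sequence(taps, state, length):
    """Generate `length` chips from a Fibonacci LFSR with the given taps."""
    out = []
    for _ in range(length):
        out.append(state[-1])                # output the last register bit
        feedback = 0
        for t in taps:                       # XOR the tapped positions
            feedback ^= state[t]
        state = [feedback] + state[:-1]      # shift, insert feedback
    return out

# Example: 5-bit LFSR with taps chosen to give a maximal-length sequence
chips = lfsr_sequence(taps=[4, 1], state=[1, 0, 0, 0, 0], length=31)
print(chips)   # one period (31 chips) of the m-sequence, period 2^5 - 1
```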

Important Information:
- Each signal is spread with the spreading code, i.e. channelization code x scrambling code
- Uplink codes are not cell specific: the channelization code is picked by the mobile and the scrambling code is assigned by the RNC. They can be decoded anywhere.
- Downlink codes are cell specific and the UE must decode each of them individually.

Wednesday, August 19, 2009

Power Control in UMTS



Power control is necessary to keep the transmitted signal power level under control, so as to minimize interference and keep the signal quality at a desired level. The main mechanisms are:
1. Closed-loop power control
· Outer-loop power control
- Uplink outer-loop power control
- Downlink outer-loop power control
· Inner-loop power control
- Uplink inner-loop power control
- Downlink inner-loop power control
2. Open-loop power control
· Uplink open-loop power control
· Downlink open-loop power control

Closed-loop power control is the power control mechanism used in UMTS to solve the near-far problem, minimize interference and keep the signal quality at an optimum level. Closed-loop power control is used in the uplink (UL) as well as the downlink (DL); however, the motives in the two cases are different. In the uplink, signals from different UEs reach the Node B with different power levels, so a stronger signal can block a weaker one, resulting in the near-far effect. In the downlink there is no near-far effect, but UEs near the cell edge or in a high-interference region may need extra power to overcome the increased other-cell interference and the weak signal due to Rayleigh fading.

Closed-loop power control can be divided into outer-loop and inner-loop power control. In the uplink, the RNC manages the outer loop and the Node B manages the inner loop; in the downlink, the UE manages the outer loop and the Node B manages the inner loop.

Inner-loop power control (also called fast closed-loop power control) operates 1,500 times per second (1.5 kHz) [Where does this value of 1.5 kHz come from? Answer: a UMTS 10 ms frame consists of 15 slots, each carrying a TPC command. This gives a power control frequency of 15/10 ms = 1500 Hz] and relies on feedback from the opposite end of the link (or channel) to maintain the signal-to-interference ratio (SIR) at a target level set by the outer-loop power control. The transmission power is increased or decreased by a fixed step size depending on whether the received SIR is below or above the target SIR. Precise power control leads to optimum use of the radio resources, resulting in increased cell capacity.
The UL inner-loop power control lets the UE adjust its output power in accordance with one or more TPC commands received in the downlink direction. Note that the increase and decrease in power are limited by upper and lower bounds defined in 3GPP TS 25.101. The upper bound, i.e. the UE maximum output power, depends on the power class of the UE; it can also be set below the maximum capability of the UE through signalling when the link is established. The lower bound, i.e. the UE minimum output power, is defined as the mean power in one timeslot (TS) and shall be less than -50 dBm.
The DL inner-loop power control is used to control the transmission power of the downlink channels at the Node B as per the TPC commands received from the UE. However, in some situations the Node B may ignore these TPC commands, for example in case of congestion (a high-load scenario).
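The following toy simulation shows the inner-loop mechanism at work over one 10 ms frame (15 TPC commands): the receiver compares the measured SIR against the target, and the transmitter steps its power by a fixed amount in response. The 1 dB step, the SIR target and the channel numbers are assumed values for illustration only.

```python
# Minimal sketch of inner-loop (fast closed-loop) power control: compare the
# measured SIR with the target SIR, send a TPC command, and step the
# transmit power up or down by a fixed step size. All numbers are invented.

import random

TARGET_SIR_DB = 6.0      # set by the outer loop
STEP_DB       = 1.0      # fixed power control step size (assumed)

tx_power_dbm = 10.0
for slot in range(15):                                    # 15 slots = one 10 ms frame
    channel_loss_db = 100.0 + random.uniform(-3, 3)       # toy fading model
    interference_dbm = -100.0
    measured_sir_db = tx_power_dbm - channel_loss_db - interference_dbm
    tpc_up = measured_sir_db < TARGET_SIR_DB              # TPC command decision
    tx_power_dbm += STEP_DB if tpc_up else -STEP_DB       # transmitter reacts
    print(f"slot {slot:2d}: SIR {measured_sir_db:5.1f} dB -> "
          f"{'UP' if tpc_up else 'DOWN'} -> Tx {tx_power_dbm:.1f} dBm")
```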

Outer-loop power control is used to set the target quality value for inner-loop power control, i.e. it adjusts the target SIR in the Node B that is used during inner-loop power control. Why does the target SIR need to be adjusted? The SIR required for a given block error rate changes with the radio conditions (for example UE speed and multipath profile), so the outer loop monitors the quality of the connection (typically the CRC results of received transport blocks) and raises or lowers the target SIR to keep the quality at the desired value. Maintaining a higher quality than needed simply wastes resources.
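One commonly described way to implement this (sometimes called the "jump" or sawtooth algorithm; not mandated by the specifications) is to raise the target SIR by a fixed step on every block error and lower it by a small step on every good block, so that the long-run BLER converges to the desired target. A sketch, with assumed step and BLER values:

```python
# Sketch of a common outer-loop approach (the "jump"/sawtooth algorithm):
# on a CRC failure raise the SIR target, on a CRC pass lower it slightly.
# Step size and target BLER below are assumptions for illustration.

def update_sir_target(sir_target_db, crc_ok,
                      step_up_db=0.5, target_bler=0.01):
    """Return the new SIR target after one received transport block."""
    step_down_db = step_up_db * target_bler / (1.0 - target_bler)
    return sir_target_db + (-step_down_db if crc_ok else step_up_db)

# Example: 99 good blocks and 1 bad block leave the target roughly unchanged,
# which is exactly the behaviour wanted at a 1% target BLER.
sir = 6.0
for crc_ok in [True] * 99 + [False]:
    sir = update_sir_target(sir, crc_ok)
print(round(sir, 3))   # ~6.0
```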

Open-loop power control is used to set the initial power of the UE (on random access) and of the downlink channels. The TPC commands used in inner-loop power control are relative, so a starting point is needed, and this is provided by open-loop power control. It is also useful for setting the power level of common shared channels, where it is not practical to send each UE individual TPC commands. In the uplink, UE measurements and broadcast cell/system parameters are used to set the initial access power on the RACH; in the downlink, the measurement reports of the UE are used to set the initial power of the downlink channels.
The open-loop power control tolerance is ±9 dB under normal conditions and ±12 dB under extreme conditions.
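For the uplink case, a commonly quoted form of the initial PRACH preamble power estimate combines the broadcast Primary CPICH transmit power, the CPICH RSCP measured by the UE, the broadcast uplink interference level and a broadcast constant value (cf. TS 25.331). A sketch with made-up parameter values:

```python
# Sketch of the open-loop estimate for the initial RACH preamble power.
# The parameter values below are illustrative, not from any real network.

def initial_preamble_power_dbm(cpich_tx_power_dbm,   # broadcast by the cell
                               cpich_rscp_dbm,       # measured by the UE
                               ul_interference_dbm,  # broadcast by the cell
                               constant_value_db):   # broadcast by the cell
    path_loss_db = cpich_tx_power_dbm - cpich_rscp_dbm
    return path_loss_db + ul_interference_dbm + constant_value_db

print(initial_preamble_power_dbm(33, -85, -100, -20))   # -2 dBm
```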

[References: TS 25.214, TS 25.215, Section 7.2.4.8 of TS 25.401]

Friday, August 14, 2009

Why are chips called so?

Chips are the code bits used for spreading the desired signal. They are called chips because they do not carry any useful information, which distinguishes them from data bits.

Thursday, August 13, 2009

What are the main differences between LAPD and LAPDm in GSM?

LAPDm stands for Link Access Procedure on D channel (modified). This is a modified version of LAPD and is optimized for the GSM Air interface.
This is done in order to use the scarce radio resources judiciously: some parts of the LAPD frame are removed to save overhead. Some differences are:
1. In LAPDm, no checksum is used, as channel coding on layer 1 takes care of error detection and correction.
2. In LAPDm, some layer 2 control frames, viz. SABM and UA frames, can carry layer 3 information.
3. LAPDm does not support extended header formats.
4. LAPDm frames are shorter than LAPD frames: one LAPDm frame carries a maximum of 23 octets, while one LAPD frame carries a maximum of 260 octets.

Wednesday, July 29, 2009

Why doesn't the open-loop power control mechanism solve the "near-far problem"?

The open-loop power control mechanism, used in CDMA-based systems, requires the transmitting entity (the mobile station) to monitor the received signal strength and interference in the downlink and adjust its transmission power accordingly. However, uplink and downlink signals use different frequencies, and there is a large frequency separation between the uplink and downlink bands in UMTS FDD mode (in Band I the uplink band is 1920-1980 MHz and the downlink band is 2110-2170 MHz, a duplex separation of 190 MHz). As such, uplink and downlink fast fading (on different frequency carriers) are not correlated: the downlink signal may suffer from a different set of diffractions and reflections than the uplink signal encounters, so the downlink measurement does not accurately predict uplink conditions. This is the reason that open-loop power control cannot be used to solve the near-far problem; it gives a correct result only on average. Therefore open-loop power control is used mainly to provide the initial power setting for the initial access to the system (RACH).

The open-loop power control tolerance is ±9 dB under normal conditions and ±12 dB under extreme conditions.


Reference: 3GPP TS 25.101.

Monday, July 27, 2009

What is the near-far problem?

Consider two mobile stations (MSs) transmitting at equal power, one nearer to the base station (BS) than the other. The BS receives more power from the nearer MS, which makes the farther MS's signal difficult to detect. The signal of one MS is noise for the other MS and vice versa, so the signal-to-noise ratio (SNR) for the farther MS is much lower. If the nearer MS transmits a signal that is orders of magnitude stronger than the farther MS's, the SNR of the farther MS may fall below the detectability threshold and it would appear that the farther MS is not transmitting at all. This situation is called the "near-far problem". It is less pronounced in GSM than in CDMA-based systems, since in GSM the MSs transmit on different frequencies and timeslots.

To overcome this problem, a power control mechanism is used: closer MSs are commanded to use less power so that the SNR for all MSs at the BS is roughly the same.
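As an illustration, here is a toy calculation with an assumed path-loss exponent and distances, showing how large the imbalance can be and how power control removes it:

```python
# Toy numbers illustrating the near-far effect with a simple path-loss model
# (the exponent and distances are assumptions, not measurements).

import math

def rx_power_dbm(tx_dbm, distance_m, exponent=3.5):
    return tx_dbm - 10 * exponent * math.log10(distance_m)

near = rx_power_dbm(21, 100)     # MS 100 m from the BS
far  = rx_power_dbm(21, 2000)    # MS 2 km from the BS, same Tx power
print(round(near - far, 1))      # ~45.5 dB: the far MS is buried under the near one

# With power control, the near MS is told to reduce power so both arrive equally:
near_pc = rx_power_dbm(21 - (near - far), 100)
print(round(near_pc - far, 1))   # 0.0 dB
```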

Friday, July 24, 2009

What are the Layer 3 messages exchanged between mobile station and network during a successful voice call in GSM?

RR (Radio Resource)
CHANNEL REQUEST
IMMEDIATE ASSIGNMENT
CIPHERING MODE COMMAND
CIPHERING MODE COMPLETE
CHANNEL RELEASE

MM (Mobility Management)
CM SERVICE REQUEST
CM SERVICE ACCEPT
AUTHENTICATION REQUEST
AUTHENTICATION RESPONSE
IDENTITY REQUEST
IDENTITY RESPONSE
TMSI REALLOCATION COMMAND
TMSI REALLOCATION COMPLETE

CC (Call Control)
SETUP
CALL PROCEEDING
ALERTING
CONNECT
PROGRESS
DISCONNECT
RELEASE
RELEASE COMPLETE

Tuesday, July 21, 2009

What is the difference between synchronized and non-synchronized handovers?

In a synchronized handover, the source and the target base stations (BTSs) are synchronized to the same system clock. Here synchronization means that the timing of the TDMA frames at the BTSs is the same, i.e. the timeslot zeros transmitted by the BTS are synchronous with the timeslot zeros of the carriers of the neighbouring BTSs. However, the frame numbers need not be the same. All timings are referenced at the BTS.

So in a synchronized handover, a mobile station (MS) is able to obtain uplink synchronization to the new cell prior to accessing it. The MS can calculate the timing advance based on the TA in the source cell and the time difference between the signals received from the source and the target cells.

In a non-synchronized handover, the source and target cell system clocks are different. So the target cell sends "Physical Information" message to the MS with the new TA value.

Synchronized handovers can be used only for intra-BTS handovers, or for inter-BTS handovers when the BTSs are sectorized or collocated, in which case the equipment is located closely enough together to allow fine synchronization inexpensively.

Thursday, July 16, 2009

What is handover? What are the different types of handovers?

A handover is defined as the change of the currently used radio channel (signaling or traffic) to another radio channel during an existing and active connection between a mobile station (MS) and base station (BTS). The handover procedure is always initiated by the network.

There are two criteria to categorize handovers:
  1. Which cells are involved, i.e. are the source and the target of the handover the same cell or different cells?
  2. Are the system clocks of the source cell and the target cell of a handover finely synchronized?
The first criterion gives handover types as "intra-cell handover" and "inter-cell handover".

The second criterion gives the handover types "synchronized handover", "non-synchronized handover", "pre-synchronized handover" and "pseudo-synchronized handover". Support of the first three is mandatory in the MS; the pseudo-synchronized case is optional and can be commanded only to an MS that supports it, as indicated in its classmark. The "Synchronization Indication" information element in the "Handover Command" message indicates which type of handover is to be performed.

Monday, July 13, 2009

How is Authentication performed in GSM?

The AUTHENTICATION procedure is a challenge-response mechanism. In GSM, authentication serves two purposes:
  • it prevents unauthorized access to the network by a Mobile Station (MS): the network checks whether the identity provided by the MS is acceptable or not;
  • it provides parameters enabling the MS to calculate a new ciphering key (used during the ciphering procedure).
The authentication procedure is always initiated and controlled by the network, which decides whether or not to use authentication depending on the context.
The cases where the authentication procedure is used are as follows:
  1. a change of subscriber-related information elements in the VLR/HLR (change of VLR on location updating, etc.)
  2. an access to a service (mobile-originated or mobile-terminated call, activation or deactivation of supplementary services)
  3. the first network access after a restart of the MSC/VLR
To authenticate the MS, the network (MSC) must have an authentication vector (triplet), consisting of:
  • RAND: 128-bit random number
  • SRES: 32-bit signed response
  • Kc: 64-bit ciphering key
The network uses this information if it is already available; otherwise a fresh triplet is fetched from the HLR/AuC using the MAP-SEND-AUTHENTICATION-INFO (IMSI) service.

The network sends an Authentication Request message to the MS. Some important points:
  • Authentication Request is an MM (Mobility Management) message
  • It is carried as a DTAP message (transported transparently; the BSS does not examine its contents) over the A-interface
  • The contents of this message are RAND and CKSN: RAND is used by the MS to generate SRES, and CKSN (Ciphering Key Sequence Number) is used to tag the Kc derived from this RAND so it can be referenced later
  • Over the Abis-interface, it is carried in a Data-Req LAPD Information frame
  • Over the Air-interface, it is carried on the signalling channel SDCCH in a LAPDm Information frame.
The MS processes the challenge information in the Authentication Request message and sends an Authentication Response message to the network.
  • Authentication Response is an MM message
  • The content of this message is SRES
  • Over the Abis-interface, it is carried in a Data-Ind LAPD Information frame
  • Over the A-interface, it is carried as a DTAP message.
Using RAND and Ki as inputs, the MS runs the A3 algorithm to produce SRES, and the A8 algorithm to produce Kc. Here, Ki is the Individual Subscriber Authentication Key. It is a 128-bit number that is paired with an IMSI when the SIM card is created. Ki is stored only on the SIM card and at the Authentication Center (AuC), and should never be transmitted across the network on any link.
The network compares the received SRES with the SRES obtained from the AuC to authenticate the user. The A3 and A8 algorithms reside on the SIM card and at the AuC.
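Below is a toy sketch of this challenge-response flow. The real A3/A8 algorithms are operator-specific (often COMP128 variants) and run only on the SIM and at the AuC; the HMAC used here is merely a placeholder so the message flow can be exercised end to end.

```python
# Toy sketch of the GSM challenge-response flow. HMAC-SHA-256 stands in for
# the operator-specific A3/A8 algorithms -- it is NOT the real algorithm.

import hmac, hashlib, os

def a3_a8_stand_in(ki: bytes, rand: bytes):
    """Return (SRES, Kc) from Ki and RAND -- placeholder, not the real A3/A8."""
    digest = hmac.new(ki, rand, hashlib.sha256).digest()
    return digest[:4], digest[4:12]          # 32-bit SRES, 64-bit Kc

ki = os.urandom(16)                          # 128-bit Ki, known only to SIM and AuC
rand = os.urandom(16)                        # 128-bit RAND challenge from the network

# AuC side: precompute the triplet (RAND, SRES, Kc)
sres_expected, kc_network = a3_a8_stand_in(ki, rand)

# MS side: receive RAND in Authentication Request, compute SRES and Kc on the SIM
sres_ms, kc_ms = a3_a8_stand_in(ki, rand)

# Network side: compare the SRES from Authentication Response with the AuC value
print("authenticated:", hmac.compare_digest(sres_ms, sres_expected))
```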

Sunday, July 12, 2009

How is contention resolved in GSM?

The random access request message (the 8-bit "Channel Request" message on the RACH) contains information on why the mobile station (MS) wishes to establish a connection with the network (a 3-bit "Establishment Cause") and a random discriminator (a 5-bit "Random Reference" number). However, it does not contain any identifier for the MS. It is therefore possible that, at the same time, more than one MS transmits a Channel Request message with the same content (the same Establishment Cause and the same Random Reference number in the same time slot). The network, on receiving such a request, processes it, reserves resources and transmits an Immediate Assignment message. If the MSs receive this message and its content corresponds to the Channel Request they transmitted, each of them assumes that a dedicated resource has been reserved for it. This situation is referred to as contention, and the problem is solved by a contention resolution procedure.

The solution is the following: the MS transmits a data link layer SABM frame containing a layer 3 service request message (i.e. CM Service Request, Location Updating Request, IMSI Detach, Paging Response or CM Re-establishment Request). The service request message contains the identity of the MS. The data link layer of the MS stores the content of this frame in order to perform contention resolution. The network returns this service request message in a data link layer UA (Unnumbered Acknowledgment) frame. The data link layer of the MS compares the content of the UA frame's information field, i.e. the echoed service request message, with the content of the stored message. If the contents are not identical, the MS concludes that its reservation failed; it then stops transmitting on the dedicated channel and eventually restarts a new reservation attempt. If the contents are identical, the MS concludes that the reservation succeeded and continues on the dedicated channel.
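A minimal sketch of the comparison step, with invented message contents:

```python
# Sketch of the contention-resolution check described above: the MS stores
# the layer 3 message it sent in the SABM frame and compares it with the
# message echoed back by the network in the UA frame. Contents are invented.

def contention_resolved(sent_l3_message: bytes, ua_information_field: bytes) -> bool:
    """True if the network's UA echoes exactly what this MS sent in its SABM."""
    return sent_l3_message == ua_information_field

my_request   = b"CM SERVICE REQUEST, TMSI=0x1A2B3C4D"   # stored by the MS data link layer
echoed_by_bs = b"CM SERVICE REQUEST, TMSI=0x99887766"   # another MS won the contention

if contention_resolved(my_request, echoed_by_bs):
    print("reservation succeeded: keep using the dedicated channel")
else:
    print("reservation failed: leave the channel and retry the random access")
```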

Thursday, July 9, 2009

What are the different voice coding standards in GSM?

Five voice codecs are standardized in GSM:
  • Full-Rate (FR) Codec
  • Half-Rate (HR) Codec
  • Enhanced Full-Rate (EFR) Codec
  • Adaptive Multi-Rate (AMR) Codec
  • Adaptive Multi-Rate Wideband(AMR-WB) Codec
All Voice codecs include speech coding (source coding), channel coding (error protection and bad frame detection), Voice Activity Detection (VAD), lost speech frame substitution and muting and comfort noise insertion.

Speech coding is the application of data compression to digital audio signals containing speech.
Channel coding consists of forward error correction and bit interleaving.

The voice codecs operate either on a GSM full-rate TCH at a gross rate of 22.8 kbps (FR, EFR, AMR-WB), on a half-rate TCH at a gross rate of 11.4 kbps (HR), or on both (AMR). The encoding process is performed on one 20 ms speech frame at a time.

FR operates at a speech coding rate of 13 kbps and a channel coding rate of 9.8 kbps. The coding scheme used is RPE-LTP (Regular Pulse Excitation with Long-Term Prediction).
HR operates at a speech coding rate of 5.6 kbps and a channel coding rate of 5.8 kbps. The coding scheme used is VSELP (Vector Sum Excited Linear Prediction). The VSELP algorithm is an analysis-by-synthesis coding technique and belongs to the class of speech coding algorithms known as CELP (Code Excited Linear Prediction).
EFR operates at a speech coding rate of 12.2 kbps. The coding scheme used is ACELP (Algebraic Code Excited Linear Prediction).
AMR-HR operates at one of 6 speech coding rates (4.75/5.15/5.9/6.7/7.4/7.95 kbps). The coding scheme used is ACELP.
AMR-FR operates at one of 8 speech coding rates (4.75/5.15/5.9/6.7/7.4/7.95/10.2/12.2 kbps). The coding scheme used is ACELP.
AMR-WB operates at one of 9 speech coding rates (6.6/8.85/12.65/14.25/15.85/18.25/19.85/23.05/23.85 kbps). The coding scheme is ACELP.
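As a quick sanity check of the figures above, the speech coding rate and the channel coding rate add up to the gross traffic channel rate:

```python
# Rate arithmetic: speech coding rate + channel coding rate = gross TCH rate.

fr_gross = 13.0 + 9.8    # full-rate TCH: 22.8 kbps
hr_gross = 5.6 + 5.8     # half-rate TCH: 11.4 kbps
print(round(fr_gross, 1), round(hr_gross, 1))   # 22.8 11.4
```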

HR, FR, EFR and AMR operate on a 3.4 kHz (narrow) audio band with a sampling rate of 8 kHz (8000 samples/s), while AMR-WB operates on a 7 kHz (wide) band with a sampling rate of 16 kHz (16000 samples/s).

Brief advantages of different codecs
  • HR effectively doubles network capacity as compared to FR.
  • EFR improves speech quality and is highly robust to network impairments.
  • AMR is specifically designed to improve link robustness. AMR supports dynamic adaptation to network conditions, using lower bit rates during network congestion or degradation while preserving audio quality. By trading off the speech bit rate to channel coding, AMR maximizes the likelihood of receiving the signal at the far end.
  • AMR-WB provides excellent speech quality due to wider speech bandwidth.

Monday, July 6, 2009

What is Frequency Hopping? Why is it necessary?

A common transmission problem in GSM, or in any wireless telephony system, is fading. This occurs as a result of shadowing or of multipath propagation. The shadowing effect is produced by buildings and natural obstacles such as hills located between the transmitting and receiving antennas of a Mobile Station (MS) and a Base Station (BS). As the MS moves around, the received signal strength increases and decreases depending on the obstacles that happen to lie between the transmitting and receiving antennas at that moment. The multipath effect occurs when the transmitted signal takes more than one path from the transmitting antenna to the receiving antenna, so that the receiving antenna of the MS receives not just one (direct) signal but several copies of it (reflected and diffracted by obstructions), each slightly shifted in phase from the others. What is actually received is the vector sum of these signals, and in some instances this vector sum may be very low or even close to zero, causing the received signal to virtually disappear.

In addition to fading, a dense GSM network suffers from co-channel interference. This means that a phone call can be interfered with by calls in another site operating on the same physical channel and time slot.

To compensate for the above transmission difficulties, GSM uses a technique called frequency hopping. Frequency hopping improves the signal-to-noise ratio of a link by adding frequency diversity: the MS sequentially communicates with the BS on different frequencies. The BS commands the MS to activate frequency hopping as the MS moves towards the edge of a cell or into an area of high interference. When frequency hopping is activated in the MS, the BS assigns the MS a set of RF channels rather than a single RF channel, together with a hopping algorithm that tells the mobile the pattern in which the available frequencies are to be used.

How does frequency hopping compensate for co-channel interference and fading?
Consider co-channel interference. Not all of the slots are in use on all of the physical channels on each site where they are reused, so although slot 4 on channel 555 might be clobbered by another conversation, slot 7 on channel 522 probably isn’t. So, if we can take each caller on a particular sector and jump them from slot to slot, and from frequency to frequency, then each user runs a far lower risk of suffering from co-channel interference. And when such interference does occur, chances are good that the error correction algorithms can take care of it.
Now consider multipath. Because of the very high carrier frequencies used by GSM, the wavelength of the signals is extremely short (only a few inches, in fact). That means the multipath phase relationships on one frequency can be quite different from those on another. By jumping from frequency to frequency we may experience problematic multipath only for very short periods of time, once again giving the error correction algorithms a chance to clean it up.

Frequency hopping can be of two types: fast and slow. In fast frequency hopping, the rate at which the frequency changes is higher than the modulation (symbol) rate. In the GSM system, the frequency is required to remain unchanged during a burst, so the frequency hopping used in GSM is slow frequency hopping.

There are two kinds of hopping algorithm:
  • Cyclic hopping: The transceiver hops through a fixed repeated pattern of frequencies
  • Pseudo-random hopping: The transceiver hops through the list of frequencies in a random manner.
Specific parameters of the channel, defined in the channel assignment message that MS uses during Frequency Hopping are:
  • MA: Mobile allocation of radio frequency channels, defines the set of radio frequency channels to be used in the mobiles hopping sequence. The MA contains N radio frequency channels, where 1 ≤ N ≤ 64.
  • MAIO: Mobile allocation index offset (0 to N-1, 6 bits).
  • HSN: Hopping sequence (generator) number (0 to 63, 6 bits).

There are a total of 64 different hopping patterns. The hopping sequence the MS uses depends on the HSN: an HSN of 0 corresponds to the cyclic hopping sequence, and values 1 to 63 correspond to pseudo-random patterns. The ARFCNs used in the hopping sequence are determined by the contents of the MA table. The entry of the MA table at which the hopping sequence begins is given by the MAIO; an MAIO of 0 corresponds to the first entry (lowest ARFCN) of the MA table.
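A small sketch of how the ARFCN for a given TDMA frame is selected. Only the cyclic case (HSN = 0) is shown; the pseudo-random case uses a lookup table (RNTABLE) defined in TS 45.002 and is omitted here. The MA table contents are invented.

```python
# Sketch of hopping frequency selection per TDMA frame. Only cyclic hopping
# (HSN = 0) is implemented; pseudo-random hopping (HSN 1..63) needs the
# RNTABLE from 3GPP TS 45.002 and is left out of this illustration.

def hopping_arfcn(ma, maio, hsn, fn):
    """Return the ARFCN used in TDMA frame number `fn`."""
    n = len(ma)                 # MA table: the allocated ARFCNs, lowest first
    if n == 1:                  # non-hopping channel (e.g. the BCCH carrier)
        return ma[0]
    if hsn == 0:                # cyclic hopping: MAI = (FN + MAIO) mod N
        mai = (fn + maio) % n
    else:
        raise NotImplementedError("pseudo-random hopping needs the TS 45.002 RNTABLE")
    return ma[mai]

ma_table = [512, 515, 520, 523]          # example mobile allocation (ARFCNs)
for fn in range(6):
    print(fn, hopping_arfcn(ma_table, maio=1, hsn=0, fn=fn))
# frames 0..5 hop cyclically: 515, 520, 523, 512, 515, 520
```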


Frequency Hopping in BTS
There are two types of frequency hopping method available for the BTS: RF frequency hopping and Baseband frequency hopping.
  • RF (synthesizer) Frequency Hopping - In this method, the BTS-TRX itself changes frequency according to the hopping sequence. So one TRX hops between multiple frequencies in the same sequence that the MS is required to follow.
  • Baseband Frequency Hopping - In this method, the BTS has several TRXs, each tuned to a fixed frequency and allocated a fixed ID. Each time slot within a TDMA frame is routed to a different TRX from frame to frame: for example, time slot 1 might be carried by TRX 2 in one TDMA frame, by TRX 3 in the next frame, and by another TRX in the frame after that. The data on each time slot is thus sent on a different frequency each frame, but the TRXs themselves do not change frequency. The BTS simply routes the data to the appropriate TRX, and the MS knows which frequency to use in any given TDMA frame.
Note: On the RF channel carrying the BCCH (C0), frequency hopping is not permitted on any timeslot supporting the BCCH. A non-hopping radio frequency channel sequence is characterized by a mobile allocation consisting of only one radio frequency channel, i.e. N=1, MAIO=0. In this case the sequence generation is unaffected by the value of HSN.

Wednesday, July 1, 2009

What is Timing Advance (TA)? Why is TA necessary?


The BTS requires a fixed offset between the uplink and downlink channels of 3 burst periods (BP), i.e. 3 timeslots ≈ 1.73 ms. [Why? This delay allows the same timeslot number to be used in both the uplink and downlink directions without requiring the MS to receive and transmit simultaneously; in this way the same transceiver can be used for transmission as well as reception.] The Mobile Station (MS) therefore has to send its data to the BTS 3 timeslots after it receives data from the BTS, and the BTS is configured to receive bursts within a precise time window.
Due to propagation delay, however, the uplink data from the MS would reach the BTS later than this stipulated time. The BTS therefore measures this delay and asks the MS to send its uplink data earlier than the "three timeslots" rule dictates.

Timing Advance = 2 × propagation delay
The factor of 2 arises because there is a round-trip propagation delay, BTS-MS-BTS. That is,
- the MS receives downlink data from the BTS with a delay
- the MS then sends uplink data after this delay, and it reaches the BTS with an equal additional propagation delay.

So the two delays add up, and the MS must send its uplink data early enough that the downlink and uplink propagation delays are nullified. In other words, the MS advances its timing by this amount, with the result that signals from different MSs arriving at the BTS are compensated for propagation delay. This process is called "adaptive frame alignment".

TA can take values from 0 to 63 (expressed in time as 0 to 232 microseconds, in steps of 48/13 microseconds, i.e. one bit period). If the TA is known, the distance between the MS and the BTS can be estimated: each step corresponds to about 550 m, as sketched below.
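A quick calculation of the distance implied by a TA value, using the bit period of 48/13 microseconds:

```python
# Distance estimate from the Timing Advance: each TA step is one bit period
# (48/13 us) of round-trip delay, i.e. roughly 550 m of one-way distance.

C = 299_792_458            # speed of light, m/s
BIT_PERIOD_S = 48 / 13e6   # one GSM bit period ~ 3.69 us

def distance_from_ta(ta: int) -> float:
    """Approximate MS-BTS distance in metres for a TA value of 0..63."""
    return ta * BIT_PERIOD_S * C / 2   # divide by 2: TA covers the round trip

print(round(distance_from_ta(1)))    # ~554 m per step
print(round(distance_from_ta(63)))   # ~35 km, the nominal GSM cell range limit
```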