Archive for the ‘Technology Basics’ Category

The ACM Bangalore Chapter was started in 2006 and is today said to be one of the most active chapters in India. It conducts a regular monthly TechTalk, and I was invited to be the speaker at this month's event. I was given the liberty to choose the topic, and I decided to talk about various aspects of security in wireless cellular systems.

Although I had planned for a 90-minute talk, it stretched an hour longer. The audience of about 50 was more curious than I had expected, the questions were intelligent, and the session was interactive in a way that suited the size of the group.

I do not intend to write about everything I spoke on. Slides of my talk can be seen on ACM Bangalore's website. I would instead like to touch upon some of the interesting questions that the audience posed.

Question#1 – How can a 3G phone with a GSM SIM work on a 3G network?

We must remember that ultimately everything hinges on the security context, which can be either GSM or UMTS. In either case, the same security context should be enabled on the AuC. So if a GSM SIM is used, the security context on the AuC ought to be GSM, say an R98- AuC. Triplets are generated and passed on to the VLR or SGSN. Since the VLR/SGSN are R99+ and use a UTRAN RAN, they apply the standardized conversion functions (c4 and c5) to convert Kc to CK and IK. CK and IK are then used within the UTRAN RAN for securing the air interface.

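The conversion functions c4 and c5 are simple bit operations. A minimal Python sketch following the definitions in 3GPP TS 33.102 (the Kc value below is an arbitrary example, not a real key):

```python
def c4(kc: bytes) -> bytes:
    """c4: derive the 128-bit UMTS CK from the 64-bit GSM Kc (CK = Kc || Kc)."""
    assert len(kc) == 8
    return kc + kc

def c5(kc: bytes) -> bytes:
    """c5: derive the 128-bit UMTS IK from the 64-bit GSM Kc.

    With Kc = Kc1 || Kc2 (32 bits each): IK = Kc1^Kc2 || Kc || Kc1^Kc2.
    """
    assert len(kc) == 8
    kc1, kc2 = kc[:4], kc[4:]
    x = bytes(a ^ b for a, b in zip(kc1, kc2))
    return x + kc + x

kc = bytes.fromhex("0123456789abcdef")  # illustrative 64-bit Kc
ck, ik = c4(kc), c5(kc)
print(ck.hex())  # 128-bit CK
print(ik.hex())  # 128-bit IK
```

Note that the derived keys carry only the 64 bits of entropy of the original Kc, which is why a GSM security context is weaker than a native UMTS one.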
Question#2 – Does number portability mean that data within an AuC is compromised?

Not really. Number portability does not mean that sensitive data from the old AuC are transferred to the new AuC. The new operator will issue a new USIM with a new IMSI. Number portability only means that the MSISDN stays the same for others to call the mobile. The translation between MSISDN and IMSI is done at a national-level register. Such a translation identifies the Home PLMN and the HLR that needs to be contacted for an incoming call.

That’s the theory and that’s how it should be done. It will be interesting to know how operators in India do this.
Question#3 – If I am roaming, is the AuC of the visited PLMN involved in AKA?

We know that the algorithms in the SIM and the AuC are proprietary and kept secret by the operator. So if I roam to another PLMN, will they be compromised? The answer is no. Even when roaming, the visited PLMN contacts the HLR of the Home PLMN. It is the HLR which then works with the AuC to perform AKA for the subscriber. The conclusion is that even in the case of roaming, AKA is performed only by the AuC of the Home PLMN. No other AuC is involved.

Question#4 – Why do we have Counter Check Procedure in RRC when we will anyway be unable to decrypt encrypted data if counters are not synchronized?

This procedure was introduced to counter "man-in-the-middle" attacks. It is invoked to check that all counters are synchronized. It is true that if the receiver is unable to decrypt an encrypted message, we can probably say that the counters have gone out of synchronization. However, such a case arises only for radio bearers that are actually transmitting data. What about bearers which are idle? Also, RLC-UM and RLC-AM will not know if data has been corrupted or is bogus; only the application can determine that. This procedure facilitates the check of counters on all radio bearers, giving the network more information. It may release the RRC connection, or it may decide to inform MM to start a new AKA.

Question#5 – When changing ciphering key in UMTS, how is the transition from old to new keys managed?

There are activation times within the Security Mode procedure at RRC. The Security Mode Command contains the RLC SN (for RLC UM and AM) and the CFN (for RLC TM) at which the change will be activated on the DL. For the UL, the UE sends back in the Security Mode Complete the RLC SN at which the change will be made. In addition, RLC transmission is suspended on all bearers with the exception of the SRB on which the procedure is executed. This is a precaution that takes into account a slow response in receiving the Security Mode Complete. Even when RLC entities are suspended, they are commanded to suspend only after a certain number of PDUs.

Question#6 – What’s the use of FRESH as an input to f9 integrity algorithm in UMTS?

Changing FRESH gives additional protection without requiring a new AKA for key refreshment. This may happen for instance after SRNS Relocation. However, I have no insights into actual network implementations in this regard.

Question#7 – At which layer do ciphering and integrity happen?

GSM – ciphering happens at PHY in MS and BTS.

GPRS – ciphering happens at LLC in MS and SGSN.

UMTS – ciphering happens at RLC (for UM and AM) and MAC (RLC-TM) in UE and RNC. Integrity happens at RRC in UE and RNC.

Question#8 – When we enter a new location area and Location Updating Procedure is initiated, will it also involve AKA?

Not necessarily. If the CKSN/KSI sent in the Location Updating Request is a valid value and the network decides that the current keys can continue to be used, no new AKA will be started. For this to be possible, the new VLR must be able to contact the old VLR to retrieve the security context of the mobile.


This morning I attended a two-hour presentation by Ankita Garg of IBM, Bangalore. The event was organized by ACM Bangalore at the premises of Computer Society of India (CSI), Bangalore Chapter on Infantry Road. It was about making Linux into an enterprise real-time OS. Here I present a summary of the talk. If you are more curious, you can download the complete presentation.


How does one define a real-time system? It’s all about guarantees made on latency and response times. Guarantees are made about an upper bound on the response time. For this to be possible the run-time behaviour of the system must be optimized by design. This leads to deterministic response times.

We have all heard of soft and hard real-time systems. The difference between them is that the former tolerates an occasional lapse in the guarantees while the latter does not. A hard real-time system that doesn't meet its deadline is said to have failed. Only if a system can meet its deadlines 100% of the time can it formally be called a hard real-time system.

Problems with Linux

Linux was not designed for real-time behaviour. For example, when a system call is made, kernel code is executed. Response time cannot be guaranteed because an interrupt could occur during its execution. This introduces non-determinism into response times.

The scheduling in Linux tries to achieve fairness while considering the priority of tasks. This is the case with priority-based round-robin scheduling. Scheduling is unpredictable because priorities are not always strict: a lower-priority process could keep running until the scheduler gets a chance to reschedule a higher-priority process. The kernel is non-preemptive, and while the kernel is executing critical sections, interrupts are disabled. One more problem is the resolution of the timer, which is at best 4 ms and usually 10 ms.

Solutions in the Real-Time Branch

A branch of development has been forked from the main branch. This is called CONFIG_PREEMPT_RT. This contains enhancements to support real-time requirements. Some of these enhancements have been ported to the main branch as well.

One important change concerns spin locks, which now behave more like semaphores: interrupts are not disabled, so code holding a spin lock remains preemptible. However, spin locks can still be used in the old way as well; it depends on whether the APIs are called with spinlock_t or raw_spinlock_t.

sched_yield has a different behaviour too. A task that calls it is added back to the runnable queue, but not at the end of the queue; instead it is added at its own priority level. This means that a lower-priority process can face starvation, but if that happens it is only because the design is faulty: users need to set correct priorities for their tasks. There is still the problem of priority inversion, which is usually overcome using priority inheritance.
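The real-time policies and their priority ranges are visible from user space. A small sketch using Python's Linux-specific os wrappers (not something shown in the talk; changing a task's policy to SCHED_FIFO would additionally require root privileges):

```python
import os

# Query the priority range for the SCHED_FIFO real-time policy.
lo = os.sched_get_priority_min(os.SCHED_FIFO)
hi = os.sched_get_priority_max(os.SCHED_FIFO)
print(f"SCHED_FIFO priorities: {lo}..{hi}")

# Yield the CPU. Under the behaviour described above, a real-time task
# that yields rejoins the queue at its own priority level, so it cannot
# be starved by this call alone -- but a lower-priority task can be.
os.sched_yield()
```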

There is also the concept of push and pull. In a multiprocessor system, a decision has to be made about the CPU on which a task will run. A task waiting in the runnable queue of a particular CPU can be pushed or pulled to another CPU depending on tasks just completed.

Another area of change is IRQ handling. The interrupt itself is kept simple while the processing is moved to an IRQ handler thread. There was some talk of soft IRQs and top and bottom halves, something I didn't understand; I suppose these will be familiar to those who have worked on interrupt code in Linux.

In plain Linux, timers are based on the OS timer tick, which does not give high resolution. High resolution is achieved by using programmable interrupt timers, which requires support from the hardware. Timers are thus separated from the OS timer tick.

A futex is a new type of mutex that is fast when uncontended: the uncontended case is handled entirely in user space. Only if the mutex is busy does the call go into kernel space and take the slower route.

The speaker mentioned the tools she had used at IBM to tune the system for real-time: SystemTap, ftrace, oprofile and tuna.

Proprietary Solutions

Besides what has been discussed above, other solutions are available – RTLinux, L4Linux, Dual OS/Dual Core using Virtual Logic, TimeSys… There was not much discussion about these implementations.

Enterprise Real-Time

Why is real-time behaviour important for enterprises? Because enterprises make guarantees through Service Level Agreements (SLAs). They guarantee certain maximum delays, which can only be achieved on an RTOS. The larger issue is that such delays are not limited to the OS: delays are as perceived by users, which means that delays at the application layer have to be considered too. It is easy to see that designers have to address real-time issues at the OS level before considering the same at the application layer.

The presenter gave application examples based on Java, which is far more common for enterprise solutions than C or C++ these days, compared with perhaps a decade ago. The problem with Java is that several factors introduce non-determinism:

  • Garbage collection: runs when system is low on free memory

  • Dynamic class loading: loads when the class is first accessed

  • Dynamic compilation: compiled once when required

Solutions exist for all of these. For example, the garbage collector can be made to run periodically when the application task is inactive, which makes response times more deterministic. Class loading can be performed statically, in advance. Instead of just-in-time (JIT) compilation, ahead-of-time compilation can be done – this replaces the usual *.jar files with *.jxe files. Some of these are part of IBM's solution named WebSphere Real-Time.

There is a wider community looking at RTSJ – the Real-Time Specification for Java.


A real-time guarantee is not just about the software; it is for the system as a whole. Hardware may provide certain functionality to enable real-time behaviour, as we have seen in the case of high-resolution timers. Since real-time behaviour is about response times, performance may be compromised in some cases. Certain tasks may run slower, but necessarily so, because there are far more important tasks that need a timing guarantee. There is indeed a trade-off between performance and real-time requirements. On average, real-time systems may not have much better response times than normal systems. However, building a real-time system is not about averages; it is about an absolute guarantee that is far more difficult to meet on a non-real-time system.


OFDM has been the accepted standard for digital TV broadcasting for more than a decade. European DAB and DVB-T standards use OFDM. HIPERLAN 2 standard is also using OFDM techniques and so is the 5 GHz extension of IEEE 802.11 standard. ADSL and VDSL use OFDM. More recently, IEEE 802.16 has standardized OFDM for both Fixed and Mobile WiMAX. The cellular world is not left behind either with the evolving LTE embracing OFDM. What is it about OFDM that makes a compelling case for widespread adoption in new standards?

Inter-symbol Interference (ISI)

One fundamental problem for communication systems is ISI. It is a fact that every transmission channel is time-variant. Two adjacent symbols are likely to experience different channel characteristics, including time delays. This is particularly true in wireless channels and for mobile terminals communicating in multipath conditions. For low bit rates (a narrowband signal), the symbol duration is sufficiently long that delayed versions of the signal all arrive within the same symbol. They do not spill over into subsequent symbols, and therefore there is no ISI. As data rates go up and/or the channel delay increases (a wideband signal), ISI starts to occur. Traditionally, this has been overcome by equalization techniques, linear predictive filters and rake receivers, all of which involve estimating the channel conditions. This works well if the number of symbols to be considered is low. Assuming BPSK, a data rate of 10 Mbps on a channel with a maximum delay of 10 µs would need equalization over 100 symbols. This would be too complex for any receiver.

In HSDPA, the data rate is as high as 14.4 Mbps, but this uses QAM16 and therefore the symbol rate is not as high. Using a higher-order modulation requires better channel conditions and higher transmit power for correct decoding. HSDPA also uses multicode transmission, which means that not all of the data is carried on a single code; the load is distributed over the physical resources, reducing ISI further. Today the need is for even higher bit rates. A higher modulation scheme such as QAM64 could be employed, but this would require higher transmission power. What could be a possible solution to the ISI problem at higher bit rates?

Orthogonal Frequency Division Multiplexing (OFDM)

Initial proposals for OFDM were made in the 1960s and 70s. It has taken more than a quarter of a century for this technology to move from the research domain to industry. The concept of OFDM is quite simple, but implementing it in practice has many complexities. A single stream of data is split into parallel streams, each of which is coded and modulated onto a subcarrier, a term commonly used in OFDM systems. Thus the high bit rate seen before on a single carrier is reduced to lower bit rates on the subcarriers. It is easy to see that ISI is therefore reduced dramatically.

This sounds too simple. Why didn't we think of this much earlier? Actually, FDM systems have been common for many decades. However, in FDM the carriers are all independent of each other: there is a guard band between them and no overlap whatsoever. This works well because in an FDM system each carrier carries data meant for a different user or application – FM radio is an FDM system. FDM is not ideal for what we want from wideband systems; using it would waste too much bandwidth. This is where OFDM makes sense.

In OFDM, subcarriers overlap. They are orthogonal because the peak of one subcarrier occurs where the other subcarriers are at zero. This is achieved by realizing all the subcarriers together using an Inverse Fast Fourier Transform (IFFT). The demodulator at the receiver recovers the parallel channels from an FFT block. Note that each subcarrier can still be modulated independently. This orthogonality is represented in Figure 1 [1].

Figure 1: OFDM Subcarriers in Frequency Domain

Ultimately, ISI is conquered. Provided that orthogonality is maintained, OFDM systems perform better than single-carrier systems, particularly in frequency-selective channels. Each subcarrier is multiplied by a complex transfer function of the channel, and equalizing this is quite simple.
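The IFFT/FFT relationship and the orthogonality of the subcarriers can be demonstrated in a few lines of NumPy. This is a generic sketch with an arbitrary 64-subcarrier QPSK setup, not the numerology of any particular standard:

```python
import numpy as np

n = 64  # number of subcarriers
rng = np.random.default_rng(0)

# One QPSK symbol per subcarrier, realized together with a single IFFT.
bits = rng.integers(0, 2, size=(n, 2))
symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
tx = np.fft.ifft(symbols)

# The receiver's FFT recovers every subcarrier independently.
rx = np.fft.fft(tx)
print(np.allclose(rx, symbols))  # True

# Orthogonality: over one symbol period, subcarrier waveforms
# exp(j*2*pi*k*t/n) for different k have zero inner product.
t = np.arange(n)
inner = np.vdot(np.exp(2j * np.pi * 3 * t / n),
                np.exp(2j * np.pi * 5 * t / n))
print(abs(inner) < 1e-9)  # True
```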

Basic Considerations

An OFDM system can experience fades just like any other system, so coding is required for all subcarriers. We do get a frequency diversity gain because not all subcarriers experience fading at the same time. Thus a combination of coding and interleaving gives us better performance in a fading channel.

Higher performance is achieved by adding more subcarriers, but this is not always possible. Adding more subcarriers can lead to random FM noise, resulting in a form of time-selective fading. Practical limitations of transceiver equipment and spectrum availability mean that alternatives have to be considered. One alternative is to add a guard time to allow for the multipath delay spread, so that symbols arriving late do not interfere with subsequent symbols. This guard time is pure system overhead and must be designed to be larger than the expected delay spread. Reducing ISI from multipath delay spread thus comes down to deciding on the number of subcarriers and the length of the guard time. Frequency-selective fading of the channel is converted to frequency-flat fading on the subcarriers.

Since orthogonality is essential to OFDM systems, synchronization in frequency and time must be extremely good. Once orthogonality is lost, we experience inter-carrier interference (ICI): interference from one subcarrier to another. There is another source of ICI. Leaving the guard time empty (no transmission) causes problems for the IFFT and FFT, which results in ICI: a delayed version of one subcarrier can interfere with another subcarrier in the next symbol period. This is avoided by extending the symbol into the guard period that precedes it, known as a cyclic prefix. It ensures that delayed symbols have an integer number of cycles within the FFT integration interval, which removes ICI so long as the delay spread is less than the guard time. We should note that the FFT integration period excludes the guard period.
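A minimal NumPy sketch of the cyclic prefix at work. The FFT size, prefix length and two-path channel below are arbitrary illustrative choices; as long as the channel delay stays within the prefix, linear convolution looks circular to the receiver and a single complex tap per subcarrier equalizes it:

```python
import numpy as np

n, cp = 64, 16  # FFT size and cyclic-prefix length (samples)
rng = np.random.default_rng(1)
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n) / np.sqrt(2)

# Transmit: IFFT, then prepend the last `cp` samples as the cyclic prefix.
body = np.fft.ifft(symbols)
tx = np.concatenate([body[-cp:], body])

# A two-path channel whose delay (5 samples) is within the guard period.
h = np.zeros(6, complex)
h[0], h[5] = 1.0, 0.4
rx = np.convolve(tx, h)[:len(tx)]

# Receive: drop the prefix, FFT, and equalize with one complex tap per
# subcarrier (the channel's frequency response).
rx_fft = np.fft.fft(rx[cp:cp + n])
H = np.fft.fft(h, n)
print(np.allclose(rx_fft / H, symbols))  # True
```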

Advanced Techniques

Although the subcarriers are orthogonal, rectangular pulse shaping gives rise to a sinc shape in the frequency domain. The side lobes decay slowly, producing out-of-band interference. If the frequency synchronization error is significant, these side lobes can degrade performance further. Soft pulse shaping, such as with Gaussian functions, has been studied. Although the signal then decays rapidly away from the carrier frequency, the problem is that orthogonality is lost, and ISI and ICI can occur over a few symbols, so equalization must be performed. There are two advantages – equalization gives a diversity gain, and soft pulse shaping results in more robustness to synchronization errors. However, diversity gain can be obtained with proper coding, and out-of-band interference can be limited by filtering. The technique of channel estimation and equalization thus seems unnecessary for OFDM systems [2].

Frame and time synchronization can be achieved using zero blocks (no transmission), training blocks, or periodic symbols of known patterns. These provide a rough estimate of frame timing; the guard period can then be used for more exact synchronization. Frequency synchronization is important to minimize ICI. Pilot symbols are used to estimate offsets and correct for them, particularly where fast synchronization is needed on short frames. For systems with continuous transmission, synchronization without pilot symbols may be acceptable if there is no hurry to synchronize.

One of the problems of OFDM is a high peak-to-average power ratio, which causes difficulties for power amplifiers: they generally have to be operated at a large backoff to avoid out-of-band interference. If this interference is to be kept 40 dB below the power density in the OFDM band, an input backoff of more than 7.5 dB is required [2]. The crest factor is defined as the ratio of peak amplitude to RMS amplitude. Crest factor reduction (CFR) techniques exist so that designers can use a cheaper PA for the same performance. Some approaches to CFR are described briefly below:

  • Only a subset of OFDM blocks that are below an amplitude threshold are selected for transmission. Symbols outside this set are converted into the suitable set by adding redundancy; these redundant bits can also be used for error correction. In practice, this method is workable only for a few subcarriers.
  • Each data sequence can be represented in more than one way, and the transmitter chooses the representation that minimises the amplitude. The receiver is informed of the choice.
  • Clipping is another technique. Used with oversampling, it causes out-of-band interference, which is generally removed by FIR filters. These filters are needed anyway to remove the side lobes due to rectangular pulse shaping. The filter causes new peaks (passband ripples), but the peak-to-average power ratio is still reduced.
  • Correcting functions are applied to the OFDM signal where peaks are seen, while keeping out-of-band interference to a minimum. If many peaks have to be corrected, the entire signal has to be attenuated, so performance cannot be improved beyond a certain limit. A similar correction can be done using an additive function (rather than a multiplicative one), with different results.
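The clipping approach from the list above can be sketched in a few lines of NumPy. The threshold of twice the RMS amplitude is an arbitrary illustrative choice, and this sketch omits the oversampling and filtering step that a real transmitter would need:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1024

def papr_db(x: np.ndarray) -> float:
    """Peak-to-average power ratio in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# One OFDM symbol: many independent QPSK subcarriers sum to a signal
# with rare but large peaks (it tends to Gaussian by the CLT).
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n)
x = np.fft.ifft(symbols)
print(f"PAPR before clipping: {papr_db(x):.1f} dB")

# Hard-clip the envelope at 2x the RMS amplitude, preserving phase.
# This reduces the crest factor at the cost of in-band distortion and
# out-of-band interference (removed later by filtering).
rms = np.sqrt(np.mean(np.abs(x) ** 2))
limit = 2 * rms
mag = np.abs(x)
scale = np.minimum(1.0, limit / np.maximum(mag, 1e-300))
clipped = x * scale
print(f"PAPR after clipping:  {papr_db(clipped):.1f} dB")
```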

One of the problems of filtering an OFDM signal is passband ripple. It is well known in filter design theory that minimizing this ripple requires more filter taps; the trade-off is between performance and cost/complexity. A higher ripple leads to a higher BER. Ripple has a worse effect in OFDM systems because some subcarriers get amplified while others get attenuated. One way to combat this is to equalize the SNR across all subcarriers using what is called digital pre-distortion (DPD). Applying DPD before filtering increases the signal power and hence the out-of-band interference; the latter must be limited by using a higher attenuation outside the passband compared with a system without pre-distortion. The sequence of operations at the transmitter would be as represented in Figure 2.

Figure 2: Typical OFDM Transmitter Chain


  1. L.D. Kabulepa, OFDM Basics for Wireless Communications, Institute of Microelectronic Systems, Darmstadt University of Technology.
  2. Andreas F. Molisch (Editor), Wideband Wireless Digital Communications, Chapter 18; Pearson Education, 2001.


Testing using TTCN-3

I had previously used TTCN-2 and never really liked it. It appeared to be a cumbersome language that forced you to do more work than needed, and the resulting test code was neither intuitive nor elegant. The tools that existed did not enhance the experience in any way. Thankfully I never had to deal with TTCN-2 in great depth; the most I did was to make light improvements to existing test cases or give my inputs to test teams as and when required.

Part of this bias I guess comes from my lack of knowledge of TTCN-2. Last week, I attended a three-day training on TTCN-3. Now I know a great deal more about TTCN as a test language. I know the capability of TTCN-3 and the vast improvements it makes over TTCN-2. I am now all excited to design a test framework that relies on TTCN-3 for test execution, test interfaces and test case suites.


TTCN previously referred to Tree and Tabular Combined Notation. This was understandable because test cases were in tabular formats, with many levels of indentation that could be regarded as a tree-like structure. With TTCN-3, the abbreviation refers to Testing and Test Control Notation. The focus is on testing, not on how the test cases are written. We can still write test cases the old TTCN-2 way, but that is no longer the only way.

Figure 1 gives an overview of TTCN-3 [1]. As we can see, test cases can be written directly in TTCN-3 core language (such a concept did not exist in TTCN-2), in tabular format or in graphical format. The standard also allows for newer presentations that could interface with the core language. For example, it’s perfectly valid for someone to write test cases in XML and have a conversion mechanism to the core language. Needless to say, an XML presentation format will remain proprietary with no tool support unless it gets standardized.

Figure 1: TTCN-3 Overview

The second fact that becomes obvious from Figure 1 is that the core language interfaces with several other languages. These interfaces facilitate the reuse of existing data types and definitions in those languages. For example, UMTS RRC signalling definitions are in ASN.1; the test engineer has no need to convert such definitions into TTCN-3. Any respectable tool in the market must be able to interface directly with these definitions and handle them seamlessly as part of the TTCN-3 core language implementation.


At this point it is appropriate to look at the format of the TTCN-3 core language. It is nothing more than plain text with well-defined syntax and semantics, the syntax being defined in Backus-Naur Form. This means that any text editor can be used to write TTCN-3 test cases. Such test cases are quite different in dynamic behaviour from C or Pascal programs, yet it is quite easy for programmers well versed in procedural languages to get used to TTCN-3. There are many similarities – keywords, data types, variables, control statements, functions, operators and operator precedence, to name a few.

Looking at the differences between TTCN-2 and TTCN-3, Table 1 illustrates an important point with regard to indentation. In TTCN-2, many levels of indentation lead to poor code readability and excessive scrolling in editors. With each alternative there is code duplication (S4), which can be avoided only if S4 is implemented as a reusable test step. Alternatives in TTCN-3 are specified more elegantly, and control flow continues at the same level of indentation. The example in Table 1 can even be simplified by defining default alternative behaviour earlier.
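To give a flavour of the flat alternative handling discussed above, here is a hypothetical TTCN-3 alt statement. The component, port, timer and template names are invented for illustration and are not taken from Table 1:

```ttcn3
testcase tc_example() runs on MyComponent {
    // Send a request, then handle the alternatives at one indentation level.
    pt.send(m_request);
    t_guard.start(5.0);
    alt {
        [] pt.receive(mw_accept) { setverdict(pass); }
        [] pt.receive(mw_reject) { setverdict(fail); }
        [] t_guard.timeout       { setverdict(inconc); }
    }
}
```

Each `[]` branch sits at the same level, which is exactly what makes the TTCN-3 version more readable than the deeply indented TTCN-2 equivalent.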

Table 1: TTCN-2 vs TTCN-3 Statements

Having the core language in text also makes it easier to view differences in a version control system. At run time, it makes debugging at the level of TTCN source a lot easier, which is important for test case developers. I have never known anyone to do similar debugging at TTCN-2 source level; the best I have seen was engineers setting intermediate verdicts in lots of places to ascertain what went wrong and where.

The language is structured in a way that allows a high level of flexibility. Test system definition is modular: an important unit of a test suite is the module, which contains one or more test cases or the control part of a test suite. Concurrent operation is possible because components can execute in parallel – although execution is of course serialized at the hardware level unless multiple processors are involved. Parameterization is possible just as it was in TTCN-2, and the concepts of PICS and PIXIT still apply because they are fundamental to any conformance testing.


Test System

Figure 2 represents a test system based on TTCN-3 [2]. The modularity of the design is apparent: adapters are distinct from the executable, and test management and codecs are distinct entities that interface with the executable. More importantly, the TCI and TRI interfaces are standardized, so users can easily migrate from one tool vendor to another without rewriting their test cases. The TTCN-3 Control Interface (TCI) provides interfaces to the codec (TCI-CD) and to test management (TCI-TM). Likewise, the TTCN-3 Runtime Interface (TRI) interfaces with the adapters, translating between the abstractions in TTCN-3 and the behaviour at runtime.

Figure 2: TTCN-3 Test System

The adapters are implemented in ANSI C or Java, the languages included in the standard. TTCN-3 allows dynamic mapping of communication channels between the TTCN-3 executable and the adapters – one more area in which TTCN-3 does better than TTCN-2, where such mapping was static.

Typical Test Cycle

The following would be present in a typical test cycle:

  • Implement the adapters in a chosen language (done only once per adapter per language of choice)
  • Implement the encoders/decoders in a chosen language (done only once per language of choice)
  • Implement the test cases in TTCN-3 (done only once)
  • Compile the test case and test suite (done only once unless test cases change) – at this stage an executable is formed from the abstract definitions
  • Link with adapters, codecs and test management (varies with tool implementation: may be a static link, runtime loading of library or inter-process communication)
  • Execute the test suite (debug if necessary)
  • Collate test results and correct the IUT (Implementation Under Test) if errors are seen


I have previously used tools from Telelogic but never really liked their GUIs; their tools have generally been the least user-friendly in my opinion. I hear from others who have evaluated their TTCN-3 support that they are now better. Telelogic is not doing just TTCN-3 – they do a whole lot of things – and I think their strength in TTCN-3 is not all that obvious.

Recently I evaluated TTWorkbench from Testing Technologies. It's an excellent tool – easy to install and easy to use, with good debugging support. It allows test cases to be written in graphical format (GFT) and logs to be viewed in the same format; naturally it also allows test cases to be written in core language format. The downside is that I found it slow in loading and building test suites. It uses the Eclipse IDE.

Next I evaluated OpenTTCN. "Open" refers to the openness of its interfaces, which conform to open standards; this allows the tool to be integrated easily with other platforms using the standardized TCI and TRI. With this focus, the tool claims to conform rigidly to all requirements of the TTCN-3 standards, and execution is generally faster than with other tools in the market. The company that makes it makes only this: nearly 14 years of experience has gone into the product, and the execution environment is claimed to be the best. The downside is that the main user interface is a primitive command-line interface. There is no support for GFT, although this is expected to arrive by the end of the year; likewise, debugging capabilities are in the development phase and are expected to be rolled out sometime this year. OpenTTCN also relies on certain free tools such as TRex, a front-end editor with support for TTCN-3 syntax checking. This too is based on Eclipse.

This is just a sample; there are lots of other tools out there. Some are free with limited capability and others are downright expensive. Some are proprietary – one example being the General Test Runner (GTR), a tool used within Huawei Technologies.


TTCN-3 is set to become a major language for formal test methodology. WiMAX is using it, SIP tests have been specified in it, and LTE is starting to use it. Other telecommunications standards are using it as well, and its use has spilled over into other sectors: the automotive sector is embracing it, AUTOSAR is using it, and those test cases may be available quite soon this year. The official TTCN-3 website is full of success stories.

It is not just for conformance testing like its predecessor. It's beginning to be used for module testing, development testing, regression testing, reliability testing, performance testing and integration testing. TTCN-3 will coexist with TTCN-2 for some time to come, but for all new test environments it will most likely replace TTCN-2 as the language of choice.


  1. Jens Grabowski et al., An Introduction into the Testing and Test Control Notation (TTCN-3).
  2. Mario Schünemann et al., Improving Test Software using TTCN-3, GMD Report 153, GMD 2001.

Read Full Post »
