
Earlier today I attended the Wireless Test World 2008 at the Le Meridien, Bangalore, a day-long seminar presented by Agilent Technologies. My particular interest was in learning about the test solutions they had in place for WiMAX and LTE standards. The former is fairly mature in both its flavours – Fixed and Mobile. The latter is not only new but also incomplete.

Lots of items in LTE standardization are still for further study (FFS). As can be expected, Agilent’s solution for LTE is an initial offering to test early PHY layer implementations. A full-fledged solution that incorporates a validated Conformance Test suite for full stack testing is still some time away. Core standards are getting ready right now, and we can expect the first test specifications later this year. Something concrete can be expected from Agilent on this front at about the same time, since they are closely involved in the standardization process. Agilent’s LTE conformance solution will build upon their existing partnership with Anite for GSM/UMTS conformance testing and take a similar route.

The interest on the day was greater for WiMAX, arguably because more companies in India are working on it than on LTE. The immediate future may be more promising for WiMAX, but LTE stands an equal chance from 2011 and beyond.

The seminar consisted primarily of presentations about established and emerging technologies, and the test capabilities Agilent offers for these. There was sufficient time in between to look at Agilent’s test solutions and see live demonstrations of their equipment. The keynote address by Mr Bhaktha Keshavachar was about the challenges faced in integrating wireless connectivity into mobile devices. In particular, the future is about bringing different radio standards into the laptop. WiFi and Bluetooth are already standard on almost all laptops. We can expect the same with GPS, 3G, WiMAX and LTE in the coming years; indeed, some such models are already out in the market. We are not talking about external plug-in modules but about MiniCards fully integrated into the laptop. Apparently, MiniCards are standardized and have an extremely small form factor. Key challenges include the co-existence of multiple radio standards on the same device (interference), platform noise, high peak currents, overheating, inter-working, system performance and battery life; and there may well be challenges that we are not yet aware of.

When it comes to WiMAX, Agilent has an impressive line-up of equipment to meet all the needs of a WiMAX programme – R&D, Design Verification, Pre-Conformance, Conformance, Manufacturing, Network Deployment and Service Assurance. An OFDM signal analyzer was available as early as 2004 and a signal generator in 2005. A one-box solution became available in 2007 and today has Wave 2 and MIMO functionality.

Agilent WiMAX Test Equipment

There are many hardware models with accompanying software packages that are sold as licensed options. These support standard WiMAX test requirements – signal generation, signal analysis, modulation analysis, constellation diagrams, power measurements, peak-to-average metrics, spectral flatness, adjacent channel leakage ratio (ACLR), CCDF and spectrum emission mask (SEM) measurements. This includes support for OFDM, OFDMA and MIMO.

Small companies with limited budgets have to make choices. The availability of similar equipment under different model numbers can make it difficult to choose the right one. The best option is to talk to the sales team and get as much information as possible. It’s about knowing whether a particular choice meets one’s requirements, and also about knowing whether we are buying more than what we really need.

Based on my understanding, I have put together a subset of WiMAX test equipment from Agilent. This covers only equipment specific to WiMAX. Of course, there is plenty of complementary equipment that can be used for any radio standard – power supplies, logic analyzers, oscilloscopes, battery drain analysis equipment and others.

N5181A MXG – Vector signal generator

  • Up to 6 GHz.

N5182A MXG – Vector signal generator

  • Up to 6 GHz.
  • Can be used with N7615B.

E4438C ESG – Vector signal generator

  • Can be used with N7615B.

E8267D PSG – Vector signal generator

  • Can be used with N7615B.

N7615B – Signal Studio for 802.16 WiMAX

  • Software that can be used with the vector signal generators above.
  • Enables application-specific signal generation without having to spend time on signal creation.

N9010A EXA – Signal Analyzer

  • Option 507 enables operation up to 7 GHz; higher-frequency options are available.
  • Better value for money than the MXA series.
  • Sophisticated user interface with colour coding of channels.
  • Provides enhanced spectrum analysis.
  • Supports measurement applications as optional extras – N9075A is the one for WiMAX.
  • Generally comes with 89600-series vector signal analysis software; examples are 89601A and 89601X.

N9020A MXA – Signal Analyzer

  • Higher specifications than the N9010A. For example, it has WiFi testing capability, which its EXA counterpart doesn’t have.

89601M – Vector signal analysis measurement application

  • Can be used with the N9010A EXA.

89601X – VXA signal analyzer measurement application

  • Can be used with the N9010A EXA.

N9075A – WiMAX measurement application

  • Can be used with the N9010A and N9020A signal analyzers.
  • Enables WiMAX-specific signal analysis.

N8300A – Wireless Networking Test Set

  • One-box solution with signal generator and analyzer.
  • Only for Mobile WiMAX.
  • Generally preferred over the E6651A for manufacturing.
  • Used with N6301A.

N6301A – WiMAX measurement application

  • Used for WiMAX transmitter testing.
  • Used with the N8300A.

E6651A – Mobile WiMAX Test Set

  • One-box solution with signal generator and analyzer.
  • Only for Mobile WiMAX.
  • For R&D, pre-conformance and conformance testing.
  • For conformance testing, used with N6430A.
  • Used for Radiated Performance Testing (RPT) by ETS-Lindgren.

N6430A – WiMAX IEEE 802.16-2004/Cor2 D3 PCT

  • For Protocol Conformance Testing (PCT).
  • Based on the E6651A.
  • Software supplied in partnership with Anite.
  • TTCN-3 runtime environment and interfaces from Testing Technologies.

N6422C – WiMAX Wireless Test Manager

  • Software with ready-to-use tests.
  • Simplifies automated testing, though it is not as formal as TTCN-3 based testing.
  • Can be used for pre-conformance testing.
  • Can be used with the E6651A.
  • Can be used with the E4438C ESG or N5182A MXG with an N7615A/B license.

Note: It’s easy to find information about WiMAX on Agilent’s website. Go to URL http://www.agilent.com/find/wimax

An Overview of OFDM

OFDM has been the accepted standard for digital TV broadcasting for more than a decade. The European DAB and DVB-T standards use OFDM. The HIPERLAN/2 standard also uses OFDM techniques, as does the 5 GHz extension of the IEEE 802.11 standard. ADSL and VDSL use OFDM. More recently, IEEE 802.16 has standardized OFDM for both Fixed and Mobile WiMAX. The cellular world is not left behind either, with the evolving LTE embracing OFDM. What is it about OFDM that makes a compelling case for widespread adoption in new standards?

Inter-symbol Interference (ISI)

One fundamental problem for communication systems is ISI. Every transmission channel is time-variant. Two adjacent symbols are likely to experience different channel characteristics, including different time delays. This is particularly true of wireless channels and of mobile terminals communicating in multipath conditions. For low bit rates (a narrowband signal), the symbol duration is sufficiently long that delayed versions of the signal all arrive within the same symbol. They do not spill over to subsequent symbols and therefore there is no ISI. As data rates go up and/or the channel delay increases (a wideband signal), ISI starts to occur. Traditionally, this has been overcome by equalization techniques, linear predictive filters and rake receivers. This involves estimating the channel conditions, and it works well if the number of symbols to be considered is low. Assuming BPSK, a data rate of 10 Mbps on a channel with a maximum delay of 10 µs would need equalization over 100 symbols. This would be too complex for any receiver.
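As a quick sanity check of the numbers quoted above, here is a back-of-the-envelope calculation (a sketch of the arithmetic only; the rate and delay are those of the example, not of any specific system):

```python
# Back-of-the-envelope check of the ISI example above (BPSK assumed)
bit_rate = 10e6                                  # 10 Mbps
bits_per_symbol = 1                              # BPSK carries 1 bit per symbol
symbol_duration = bits_per_symbol / bit_rate     # 0.1 microseconds per symbol
max_delay = 10e-6                                # 10 microseconds of channel delay

symbols_spanned = max_delay / symbol_duration
print(f"Symbol duration: {symbol_duration * 1e6:.2f} us")
print(f"Delayed echoes smear across about {symbols_spanned:.0f} symbols")   # ~100
```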

In HSDPA, the data rate is as high as 14.4 Mbps, but this uses QAM16 and therefore the symbol rate is not as high. Using higher-order modulation requires better channel conditions and a higher transmit power for correct decoding. HSDPA also uses multicode transmission, which means that not all of the data is carried on a single code. The load is distributed across the physical resources, reducing ISI further. Today the need is for even higher bit rates. A higher modulation scheme such as QAM64 may be employed, but this would require higher transmission power. What could be a possible solution for solving the ISI problem at higher bit rates?

Orthogonal Frequency Division Multiplexing (OFDM)

Initial proposals for OFDM were made in the 60s and the 70s. It has taken more than a quarter of a century for this technology to move from the research domain to industry. The concept of OFDM is quite simple but implementing it practically has many complexities. A single stream of data is split into parallel streams, each of which is coded and modulated onto a subcarrier, a term commonly used in OFDM systems. Thus the high bit rate seen before on a single carrier is reduced to lower bit rates on the subcarriers. It is easy to see that ISI will therefore be reduced dramatically.

This sounds too simple. Why didn’t we think of this much earlier? Actually, FDM systems have been common for many decades. However, in FDM the carriers are all independent of each other. There is a guard band between them and no overlap whatsoever. This works well because in an FDM system each carrier carries data meant for a different user or application; FM radio is an FDM system. FDM is not ideal for what we want from wideband systems, because it would waste too much bandwidth. This is where OFDM makes sense.

In OFDM, subcarriers overlap. They are orthogonal because the peak of one subcarrier occurs where the other subcarriers are at zero. This is achieved by realizing all the subcarriers together using an Inverse Fast Fourier Transform (IFFT). The demodulator at the receiver recovers the parallel channels from an FFT block. Note that each subcarrier can still be modulated independently. This orthogonality is represented in Figure 1 [1].

Figure 1: OFDM Subcarriers in Frequency Domain
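To make the IFFT/FFT relationship concrete, here is a minimal NumPy sketch. It is purely illustrative: the subcarrier count and the QPSK mapping are my own choices and are not tied to any particular standard.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subcarriers = 64                                 # illustrative FFT size

# One independent QPSK symbol per subcarrier
bits = rng.integers(0, 2, size=(n_subcarriers, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Transmitter: a single IFFT realizes all the overlapping, orthogonal subcarriers at once
time_signal = np.fft.ifft(qpsk) * np.sqrt(n_subcarriers)

# Receiver: an FFT separates the subcarriers again, with no mutual interference
recovered = np.fft.fft(time_signal) / np.sqrt(n_subcarriers)

print(np.allclose(recovered, qpsk))                # True: orthogonality preserved
```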

Ultimately ISI is conquered. Provided that orthogonality is maintained, OFDM systems perform better than single-carrier systems, particularly in frequency-selective channels. Each subcarrier is multiplied by a single complex value of the channel’s transfer function, and equalizing this is quite simple.

Basic Considerations

An OFDM system can experience fades just as any other system. Thus coding is required for all subcarriers. We do get frequency diversity gain because not all subcarriers experience fading at the same time. Thus a combination of coding and interleaving gives us better performance in a fading channel.

Higher performance is achieved by adding more subcarriers but this is not always possible. Adding more subcarriers could lead to random FM noise, resulting in a form of time-selective fading. Practical limitations of transceiver equipment and spectrum availability mean that alternatives have to be considered. One alternative is to add a guard time in the time domain to allow for multipath delay spread. Thus, symbols arriving late will not interfere with the subsequent symbols. This guard time is a pure system overhead and must be designed to be larger than the expected delay spread. Reducing ISI from multipath delay spread thus comes down to deciding on the number of subcarriers and the length of the guard period. Frequency-selective fading of the channel is converted to frequency-flat fading on the subcarriers.

Since orthogonality is important for OFDM systems, synchronization in frequency and time must be extremely good. Once orthogonality is lost we experience inter-carrier interference (ICI), which is interference from one subcarrier to another. There is another source of ICI: adding a guard time with no transmission causes problems for the IFFT and FFT, and a delayed version of one subcarrier can interfere with another subcarrier in the next symbol period. This is avoided by extending the symbol into the guard period that precedes it, which is known as a cyclic prefix. It ensures that delayed symbols will have an integer number of cycles within the FFT integration interval. This removes ICI so long as the delay spread is less than the guard period. We should note that the FFT integration period excludes the guard period.
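Extending the earlier sketch, the following hedged example shows a cyclic prefix absorbing a made-up two-path channel whose delay spread is shorter than the guard period, so that each subcarrier can be equalized with a single complex tap:

```python
import numpy as np

rng = np.random.default_rng(1)
N, cp_len = 64, 16                                  # illustrative sizes

data = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
ofdm_symbol = np.fft.ifft(data)

# Cyclic prefix: copy the tail of the symbol and place it in front
tx = np.concatenate([ofdm_symbol[-cp_len:], ofdm_symbol])

# A made-up multipath channel whose delay spread fits inside the guard period
h = np.array([1.0, 0.0, 0.5j])
rx = np.convolve(tx, h)[:len(tx)]

# Receiver: drop the prefix, take the FFT and equalize each subcarrier with one complex tap
rx_no_cp = rx[cp_len:cp_len + N]
H = np.fft.fft(h, N)                                # channel response seen by each subcarrier
equalized = np.fft.fft(rx_no_cp) / H

print(np.allclose(equalized, data))                 # True: no ISI or ICI remains
```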

Advanced Techniques

Although subcarriers are orthogonal, rectangular pulse shaping gives rise to a sinc shape in the frequency domain. Side lobes decay slowly, producing out-of-band interference. If the frequency synchronization error is significant, these side lobes can degrade performance further. Soft pulse shaping, such as using Gaussian functions, has been studied. Although the signal then decays rapidly away from the carrier frequency, the problem is that orthogonality is lost. ISI and ICI can occur over a few symbols, so equalization must be performed. There are two advantages – equalization gives diversity gain and soft pulse shaping results in more robustness to synchronization errors. However, diversity gain can be obtained with proper coding and out-of-band interference can be limited by filtering. Thus, the technique of channel estimation and equalization seems unnecessary for OFDM systems [2].

Frame and time synchronization can be achieved using zero blocks (no transmission), training blocks or periodic symbols with known patterns. These provide a rough estimate of frame timing, and the guard period can then be used for more exact synchronization. Frequency synchronization is important to minimize ICI. Pilot symbols are used to estimate offsets and correct for them, and are preferred where fast synchronization is needed on short frames. For systems with continuous transmission, synchronization without pilot symbols may be acceptable if there is no hurry to get synchronized.
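As an illustration of how the guard period itself can help with frequency synchronization, here is a small sketch; the sizes and the offset are arbitrary, and this is only one of several possible estimators. Because the cyclic prefix is a copy of samples one FFT length away, the phase of their correlation reveals a fractional carrier frequency offset.

```python
import numpy as np

rng = np.random.default_rng(2)
N, cp_len = 64, 16                                 # illustrative sizes
eps_true = 0.12                                    # frequency offset, in units of subcarrier spacing

data = np.exp(1j * 2 * np.pi * rng.random(N))      # unit-magnitude symbols on each subcarrier
symbol = np.fft.ifft(data)
tx = np.concatenate([symbol[-cp_len:], symbol])

# Apply a carrier frequency offset of eps_true subcarrier spacings
n = np.arange(len(tx))
rx = tx * np.exp(1j * 2 * np.pi * eps_true * n / N)

# The cyclic prefix repeats exactly N samples later; the phase of that correlation
# gives the fractional frequency offset (valid only for |eps| < 0.5)
corr = np.sum(np.conj(rx[:cp_len]) * rx[N:N + cp_len])
eps_estimate = np.angle(corr) / (2 * np.pi)
print(round(eps_estimate, 3))                      # ~0.12
```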

One of the problems of OFDM is a high peak-to-average power ratio. This causes difficulties for power amplifiers, which generally have to be operated at a large backoff to avoid out-of-band interference. If this interference is to be kept more than 40 dB below the power density in the OFDM band, an input backoff of more than 7.5 dB is required [2]. The crest factor is defined as the ratio of peak amplitude to RMS amplitude. Crest factor reduction (CFR) techniques exist so that designers are able to use a cheaper PA for the same performance. Some approaches to CFR are described briefly below, followed by a small numerical sketch:

  • Only a subset of OFDM blocks that stay below an amplitude threshold is selected for transmission. Symbols outside this set are converted into the suitable set by adding redundancy. These redundant bits could also be used for error correction. In practice, this method is feasible only for a small number of subcarriers.
  • Each data sequence can be represented in more than one way. The transmitter chooses the one that minimizes the peak amplitude and signals the choice to the receiver.
  • Clipping is another technique (see the sketch after this list). Used with oversampling, it causes out-of-band interference, which is generally removed by FIR filters. These filters are needed anyway to remove the side lobes due to rectangular pulse shaping. The filter causes new peaks (passband ripples) but the peak-to-average power ratio is still reduced.
  • Correcting functions are applied to the OFDM signal where peaks are seen, while keeping out-of-band interference to a minimum. If many peaks are to be corrected, the entire signal has to be attenuated and therefore performance cannot be improved beyond a certain limit. A similar correction can be done using an additive function (rather than a multiplicative one) with different results.
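Below is a hedged numerical sketch of the peak-to-average problem and of naive clipping. The subcarrier count, the QPSK loading and the clipping threshold are arbitrary choices for illustration; a real design would oversample and filter, as described above.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 256                                            # illustrative subcarrier count

qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(qpsk) * np.sqrt(N)                 # OFDM symbol with unit average power

def papr_db(signal):
    return 10 * np.log10(np.max(np.abs(signal) ** 2) / np.mean(np.abs(signal) ** 2))

print(f"PAPR before clipping: {papr_db(x):.1f} dB")

# Naive clipping to an arbitrary amplitude threshold; the out-of-band regrowth it
# causes is ignored here, whereas a real design would oversample and filter it
threshold = 1.5 * np.sqrt(np.mean(np.abs(x) ** 2))
clipped = np.where(np.abs(x) > threshold, threshold * x / np.abs(x), x)
print(f"PAPR after clipping:  {papr_db(clipped):.1f} dB")
```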

One of the problems of filtering an OFDM signal is the passband ripple. It is well-known in filter design theory that if we want to minimize this ripple, the number of taps on the filter should be increased. The trade-off is between performance and cost-complexity. A higher ripple leads to higher BER. Ripple has a worse effect in OFDM systems because some subcarriers get amplified and others get attenuated. One way to combat this is to equalize the SNR across all subcarriers using what is called digital pre-distortion (DPD). Applying DPD before filtering increases the signal power and hence out-of-band interference. The latter must be limited by using a higher attenuation outside the passband as compared to a system without predistortion. The sequence of operations at the transmitter would be as represented in Figure 2.

Figure 2: Typical OFDM Transmitter Chain
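To illustrate the pre-distortion idea described above, here is a small sketch. The filter, the cutoff and the set of occupied subcarriers are all invented for the example; the point is only that scaling each occupied subcarrier by the inverse of the filter gain flattens the per-subcarrier power after filtering.

```python
import numpy as np

N = 64                                              # FFT size (illustrative)
cutoff = 0.35                                       # low-pass cutoff, fraction of the sampling rate
n = np.arange(-16, 17)
taps = 2 * cutoff * np.sinc(2 * cutoff * n)         # truncated sinc: exhibits Gibbs ripple

H = np.fft.fft(taps, N)                             # gain the filter applies to each FFT bin
occupied = np.r_[1:20, 45:64]                       # subcarriers kept inside the passband
ripple = 20 * np.log10(np.abs(H[occupied]))
print(f"Passband gain variation without DPD: {ripple.max() - ripple.min():.2f} dB")

# Digital pre-distortion: scale each occupied subcarrier by the inverse of the
# filter gain so that, after filtering, all subcarriers come out at equal power
data = np.ones(N, dtype=complex)
data[occupied] /= np.abs(H[occupied])
after_filter = np.abs(H[occupied] * data[occupied])
print(np.allclose(after_filter, 1.0))               # True: flat across the occupied subcarriers
```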

References:

  1. L.D. Kabulepa, OFDM Basics for Wireless Communications, Institute of Microelectronic Systems, Darmstadt University of Technology.
  2. Andreas F. Molisch (Editor), Wideband Wireless Digital Communications, Chapter 18; Pearson Education, 2001.

If you have been looking for an update on last month’s Mobile Monday Bangalore on this blog and didn’t manage to find it, that’s because I missed the event; I had a family function to attend on the same day. So I was more than keen to attend yesterday’s event. Better still, it was at the Indiranagar Sangeet Sabha, which is a spacious venue with good arrangements, and it is only ten minutes’ walk from my office.

The first piece of important information was that Forum Nokia is the first sponsor of Mobile Monday Bangalore on a long-term basis, starting from this month’s event. The more interesting aspect is that the first sponsorship money went towards providing Qualcomm a platform to present BREW to the MoMo community.

BREW (Binary Runtime Environment for Wireless) is an application development framework that provides developers with a rich set of APIs for quick and easy development of mobile applications. For end users, the user experience is enhanced. When both of these are met, the man in the middle (the operator) stands to benefit as well. In fact, BREW enables operators to reach subscribers with a richer set of applications. The end result is a win-win situation for everyone.

What is the problem today? Rakesh Godhwani of Qualcomm pointed out that the network is ready and devices are ready, but content is lagging far behind. With networks getting upgraded to HSPA and CDMA2000 EV-DO, bandwidth appears to be available. With handsets able to operate to full capability in such networks, the only thing missing is the applications. In my opinion this is a rather simplified, if not biased, view, but it is partially correct and the argument holds water.

Take the example of BSNL’s recent launch of CDMA2000 EV-DO. Someone announced that this service has been launched in a couple of circles in Kerala, not with handsets but with data cards. I don’t know much about EV-DO, but I would expect it to carry the same hype as HSPA, where guaranteed high bandwidth for individual users is rare if it happens at all: it is shared bandwidth under non-ideal channel conditions, only occasionally close to the base station. So what if applications are not available? Is the market ready? Are subscribers willing to pay? Is the pricing attractive? What’s the predicted change in consumption patterns of mobile subscribers? Are these subscribers changing on a social level when it comes to tele-interaction?

But the importance of applications should not be underestimated. Applications are just as important as, if not more important than, a subscriber’s choice of operator or handset. For VAS, what we are seeing is a fragmentation of devices, technologies and networks. It is perhaps only applications that have the ability to give subscribers a seamless experience across these diverse environments. The onus is therefore on the developer to build applications that can work in more than one environment. A case in point is the fragmentation of the PC market between Windows and Linux. The choice there is obvious for developers, but in the world of mobiles there is no obvious choice. Developers have to consider Symbian, Linux, Windows Mobile, J2ME, BREW, Maemo and Android without being dismissive of any of them.

As for BREW, the case is strong. As of November 2006, BREW was being used by 69 global operators and 45 device manufacturers in 31 countries. Every CDMA mobile deployed in India supports BREW. With CDMA taking up almost 30% of the Indian market, the reach developers get for their applications is no small number. The additional advantage is that price negotiation and revenue sharing are done between Qualcomm and the developer without involvement of the operator, who is free to charge his premium to the end user. Having said that, other business models are also possible. Lucrative, yes. It also means that Qualcomm has to pick and choose: only applications that are unique and show promise for the market will get a chance. It is something like writing a novel – publishers look for market value in conjunction with individuality of writing.

How does one entice the subscriber? Give a free trial for two weeks. Once he gets used to it, chances are he will buy it when his trial starts to expire. Getting new and exciting applications is one thing; using them is another. A successful application must be easy to download and install. The user interface must be elegant and intuitive. It must be attractive and useful. All these are challenges on a device that is so much smaller than a laptop screen. In India, we are still a long way from getting there. Only 10% of revenue is from VAS, much of which comes from SMS-based services. This is where companies like Mango Technologies make a difference, with solutions targeted at low-end handsets and the cautious spender.

BREW doesn’t come on its own just for developers. There is an entire platform built around it for service delivery, billing, subscription and so forth. One such framework is uiOne, whose software framework is captured in Figure 1 [1]. It enables easier rollout and maintenance of services on the carrier network.

Figure 1: uiOne Software Framework

Following Rakesh’s informal and interactive presentation, there was a short demo of an LBS application running on BREW. It was shown on a Motorola phone from Tata. My user experience was good but nothing out of the ordinary. Perhaps this is because I prefer to explore the environment on my own rather than let someone else tell me where the nearest restaurant is. In this demo, assisted GPS was used, which enables locating the subscriber indoors even without good satellite reception. This is because the access network sends satellite information to the mobile for location computation. In addition, Qualcomm employs many proprietary fallback mechanisms to locate a mobile. One of these is called Advanced Forward Link Trilateration; there are six others. LBS is one of the promising applications but we are yet to see the “killer” LBS application. Point to note: developers have to tie up with map and GIS data providers on their own; Qualcomm is not involved in this at the moment. In fact, developing and deploying an LBS application is a challenge because it involves so many parties – operators, government (for privacy), map providers, developers, OEMs and chipset makers.

The philosophy of Qualcomm rests on three pillars – innovation, partnership and execution. R&D spending is 20% of profits. The end subscriber is kept in view, but their main business is to license their technology to OEMs and operators. Thus, they say, Qualcomm has a high number of engineers (who innovate) and lawyers (who protect the licenses). Idea generation is an important activity in the company. Once a promising idea comes to the fore, everyone works to bring it to fruition, taking it from an idea to a product.

The meeting ended with a short presentation by Forum Nokia. They talked about Symbian and its many components. They talked about Maemo, Widsets and FlashLite, about which I will write separately. This presentation, seen within the context of Qualcomm’s, highlighted that diversity in all aspects of the mobile world is here to stay. If we cannot agree, let us compete.

References:

  1. Personalizing Information Delivery with uiOne™, deliveryOne™, and the BREW Express™ Signature Solution; Qualcomm, 80-D7262-1 Rev. C, March 7, 2007.

Testing using TTCN-3

I had previously used TTCN-2 and I never really liked it. It appeared to be a cumbersome language that forced you to do more work than needed. The resulting test code was neither intuitive nor elegant. The tools that existed did not enhance the experience in any way. Thankfully I never had to deal with TTCN-2 in great depth. The most I did was to make light improvements to existing test cases or give my inputs to test teams as and when required.

Part of this bias I guess comes from my lack of knowledge of TTCN-2. Last week, I attended a three-day training on TTCN-3. Now I know a great deal more about TTCN as a test language. I know the capability of TTCN-3 and the vast improvements it makes over TTCN-2. I am now all excited to design a test framework that relies on TTCN-3 for test execution, test interfaces and test case suites.

Definition

TTCN previously referred to Tree and Tabular Combined Notation. This was understandable because test cases were in tabular formats, with many levels of indentation that could be regarded as a tree-like structure. With TTCN-3, the abbreviation refers to Testing and Test Control Notation. The focus is on testing and not really on how those test cases are written. Yes, we can still write test cases the old TTCN-2 way, but that’s not the only way.

Figure 1 gives an overview of TTCN-3 [1]. As we can see, test cases can be written directly in TTCN-3 core language (such a concept did not exist in TTCN-2), in tabular format or in graphical format. The standard also allows for newer presentations that could interface with the core language. For example, it’s perfectly valid for someone to write test cases in XML and have a conversion mechanism to the core language. Needless to say, an XML presentation format will remain proprietary with no tool support unless it gets standardized.

Figure 1: TTCN-3 Overview

The second fact that becomes obvious from Figure 1 is that the core language interfaces with different other languages. These interfaces facilitate the reuse of existing data types and definitions that might have been defined in those languages. For example, UMTS RRC signalling definitions are in ASN.1. For the test engineer, there is no need to convert such definitions into TTCN-3. Any respectable tool in the market must be able to interface directly to these definitions and handle them seamlessly as part of TTCN-3 core language implementation.

Language

At this point it is appropriate to see what the format of the TTCN-3 core language is. It is nothing more than simple text with well-defined syntax and semantics; the syntax is defined using Backus-Naur Form. What this means is that any text editor can be used to write TTCN-3 test cases. Such test cases are quite different in dynamic behaviour from C or Pascal. Still, programmers well versed in procedural languages can get used to TTCN-3 quite easily. There are many similarities – keywords, data types, variables, control statements, functions, operators and operator precedence, just to name a few.

Looking at the differences between TTCN-2 and TTCN-3, Table 1 illustrates an important point with regard to indentation. In TTCN-2, many levels of indentation lead to poor code readability and excessive scrolling in editors. With each alternative, there is code duplication (S4), which can be solved only if S4 is implemented as a reusable test step. Alternatives in TTCN-3 are specified more elegantly and the control flow continues at the same level of indentation. Even the example in Table 1 can be simplified by defining default alternative behaviour earlier.

Table 1: TTCN-2 vs TTCN-3 Statements

Having the core language in text also makes it easier to look at differences in a version control system. At run time, it makes debugging at the level of TTCN source a lot easier. This is important for test case developers. I have never known anyone who did any similar debugging at TTCN-2 source. The best I have seen was engineers setting intermediate verdicts at lots of places to ascertain what went wrong and where.

The language is structured in a way that allows a high level of flexibility. Test system definition is modular. In fact, an important unit of a test suite is a module, which contains one or more test cases or the control part of a test suite. Concurrency of operation is possible because components can execute in parallel. Of course, execution is serialized at the level of the hardware unless multiple processors are involved. Parameterization is possible just as it was in TTCN-2. The concepts of PICS and PIXIT still apply because they are fundamental to any conformance testing.

Test System

Figure 2 represents the test system based on TTCN-3 [2]. The modularity of the design is apparent. Adapters are distinct from the executable. Test management and codecs are distinct entities that interface to the executable. More importantly, interfaces TCI and TRI are standardized so that users have a choice of easily migrating from one tool vendor to another without needing to rewrite the test cases. TTCN-3 Control Interface (TCI) allows for interfacing to codec (TCI-CD) and to test management (TCI-TM). Likewise, TTCN-3 Runtime Interface (TRI) interfaces to the adapters. This interface does the translation between the abstraction in TTCN-3 and the behaviour in runtime.

Figure 2: TTCN-3 Test System

The adapters are implemented in ANSI C or Java, for which mappings are included in the standard. TTCN-3 allows for dynamic mapping of communication channels between the TTCN-3 executable and the adapters. This is one more area where TTCN-3 does better than TTCN-2, in which such mapping was static.

Typical Test Cycle

The following would be present in a typical test cycle:

  • Implement the adapters in a chosen language (done only once per adapter per language of choice)
  • Implement the encoders/decoders in a chosen language (done only once per language of choice)
  • Implement the test cases in TTCN-3 (done only once)
  • Compile the test case and test suite (done only once unless test cases change) – at this stage an executable is formed from the abstract definitions
  • Link with adapters, codecs and test management (varies with tool implementation: may be a static link, runtime loading of library or inter-process communication)
  • Execute the test suite (debug if necessary)
  • Collate test results and correct the IUT (Implementation Under Test) if errors are seen

Tools

I have previously used tools from Telelogic but never really liked their GUIs. Their tools have generally been the least user-friendly in my opinion, though I hear from others who have evaluated their TTCN-3 support that they are now better. Telelogic is not doing just TTCN-3. They do a whole lot of things, and I think their strength in TTCN-3 is not all that obvious.

Recently I evaluated TTWorkbench from Testing Technologies. It’s an excellent tool – easy to install and easy to use. It has good debugging support. It allows for test case writing in graphical format (GFT) and looking at logs in the same format. Naturally it also allows writing of test cases in core language format. The downside of this tool is that I found it to be slow in loading and building test suites. It uses Eclipse IDE.

Next I evaluated OpenTTCN. “Open” refers to the openness of its interfaces, which conform to open standards. This allows the tool to be integrated easily with other platforms using the standardized TCI and TRI. With this focus, the tool claims to conform rigidly to all requirements of the TTCN-3 standards. Execution is generally faster than other tools in the market. The company that makes this tool makes only this: nearly 14 years of experience has gone into the product and the execution environment is claimed to be the best. The downside is that the main user interface is a primitive command-line interface. There is no support for GFT, although this is expected to arrive by the end of the year. Likewise, debugging capabilities are in the development phase and are expected to be rolled out sometime this year. OpenTTCN also relies on certain free tools such as TRex, a front-end editor with support for TTCN-3 syntax checking. This too is based on Eclipse.

This is just a sample. There are lots of other tools out there. Some are free with limited capability and others are downright expensive. Some are proprietary. One example in this regard is the General Test Runner (GTR), a tool used in Huawei Technologies.

Conclusion

TTCN-3 is set to become a major language for formal test methodology. WiMAX is using it. SIP tests have been specified in TTCN-3. LTE is starting to use it. Other telecommunications standards are using it as well, and its use has spilled over to other sectors. The automotive sector is embracing it: AUTOSAR is using it, and those test cases may be available quite soon this year. The official website of TTCN-3 is full of success stories.

It is not just for conformance testing like its predecessor. It is beginning to be used for module testing, development testing, regression testing, reliability testing, performance testing and integration testing. TTCN-3 will coexist with TTCN-2 for some time to come, but for all new test environments it will most likely replace TTCN-2 as the language of choice.

References

  1. Jens Grabowski et al., An Introduction into the Testing and Test Control Notation (TTCN-3).
  2. Mario Schünemann et al., Improving Test Software using TTCN-3, GMD Report 153, GMD 2001.

Some Venn Diagrams

Different perspectives exist for the same thing. No two people look at the world in exactly the same way, and when two people collaborate they look at the world from a completely new perspective that borrows from their individual perspectives. I got the idea of using colours in Venn diagrams to represent this. In this post, I present some applications of this idea.

Figure 1: Success of a Team Project

Figure 2: Job Satisfaction and Effectiveness

Figure 3: The Market Place

Figure 4: Today’s Mobile World

Figure 5: Evolution of the Market Place

Entrepreneurship in India

Starting-up

I have not made a post to this blog for a long time. Needless to say, I have been busy with lots of things. I recently joined an Indian startup. I had previously thought of starting my own company. After considering such a proposition from all angles and recognizing my own strengths and weaknesses, I decided to abandon it. I don’t have that many contacts in India. I have never worked in India in all my career and I am new to how things are done here. Last of all, I kept questioning my motives for starting a company. The motives were not that obvious and certainly not healthy for the business.

What’s the next best thing, or perhaps even a better thing, than starting your own company? Joining a start-up. That’s exactly what I have done. I started my new job in the new year and have been busy ever since. Today I finally decided to put in some time to make this post. Choosing a start-up is not exactly an easy thing to do. You have to consider the pay packet, which may not be up to market rates. You have to consider the working environment and the general lack of facilities that are taken for granted in big corporations. You have to forgo medical benefits and insurance.

The real joy in a start-up is obvious to anyone who has worked in big corporations. There is a personal touch to everything. You know everyone in the company. The structure is flat. Rules are few and flexible. You take responsibility and make things happen. You generate ideas and take initiative. If the company has to grow, your actions matter. If the company grows, you grow along with it. Ultimately, you may not be the CEO or Founder, but you feel that you belong to the company and the company (part of it anyway) belongs to you.

Entrepreneurship

Starting a company is all about entrepreneurship. The common notion is that people don’t want to work for others. They want to do something on their own, be their own boss. On the contrary, I have a holistic take on this. The founder of a company works for and with everyone around him. His company works for the industry in which it operates. Market forces are always keeping his business vigilant. He may have some control over suppliers and customers but ultimately the market dictates his actions. The beauty is that as an entrepreneur you are in the driving seat. You decide how best to chart the progress of your company that aims to grow along with the market.

This appetite was long absent in India. People worked for rajas and maharajas. Then they worked for the British Raj. There was little innovation, if any at all. India is a country proud of its culture, customs and tradition. Anything new was seen as an invasion into age-old customs and tradition. Why change when there was no need to? Everyone was happy running their businesses on a small and medium scale, in the same way for many decades. Entrepreneurship is not about just running your own company. It’s about generating ideas, innovating and driving the industry to new heights.

There are lots of small Indian start-ups today. Some will succeed, many won’t. Not all are run by entrepreneurs in the true sense. Though all have some degree of innovation, in a competitive market only the best will survive. Some will grow to become big players while others will find their niche in the market. Many others will just dwindle and close. An example is the growing number of start-ups getting into social networking sites for the Indian market with very little competitive advantage [Business Week, November 30, 2007]. It is not an easy job. Technical innovation is one thing. It has to be backed by keen business acumen which involves a whole suite of decisions – how to keep costs low, is this the right time to release the product, what’s the best business model, is it better to import or make it indigenously, what’s the right price for the offered quality.

Today we are seeing more and more Indians breaking out of their long-held comfort zones and foraying into entrepreneurship. It’s part of the development of the Indian psyche, which no longer sees itself as being repressed by British colonialism, princes and maharajas, or self-constraining customs. Independent India wallowed for half a century in dirty politics and ineffective governments. With the liberalization of the Indian economy, Indians may not yet be innovating for global impact, but at least we are building and improving on ideas that come to us from the rest of the world. The economy has firmly put India on the path of progress and, for the first time, governments, corrupt though they may remain, are forced to take note and follow.

Economy

Very few industries actually grow during a recession. The best time for starting a company is when the economy is growing or already in a boom. This perhaps is the most important reason why so many Indian companies are doing so well. Even those that are not doing well have the potential. In a growing market and a bullish economy, the potential is always high. In the current financial year the Indian economy is set to grow at 8.7% [Business Week, February 7, 2008].

The potential for growth is enormous in many sectors. It is hard to see how, when we have come so far, the Indian economy can go into a recession. It is not likely to happen for another five straight years. A bolder prediction is that the economy will grow at 8% until 2020 [Financial Times, January 24, 2007]. The current growth has disproved previous predictions which put the long-term growth rate at 5%.

A global recession, if and when it happens, will affect India. BPOs, call centres and the software services industry will be affected. The appreciation of the rupee is set to continue and will squeeze profit margins. Though India has sufficient resources for supply and potential for demand on its own, it is not self-sufficient. If India has to grow in the face of a global recession, there must be technological innovation and an improved level of self-sufficiency.

Ecosystem

The ecosystem plays a key role in self-sufficiency. Take the case of manufacturing a mobile phone. While Nokia has a high-capacity plant near Chennai – into which it has just pumped a $75 million investment [Nokia, December 5, 2007] – key components that go into the phone are not made in India. The phone may have chips from Texas Instruments or STMicroelectronics, who prefer to have their fabs in China rather than in India. In fact, though India has been in the semiconductor industry for a good three decades, most of India’s contribution has been towards software development and hardware design. Only in 2009 is Hyderabad slated to get India’s first fab.

If Indian companies have to compete against big foreign players they need to source indigenous components to keep costs low. They need to make the best of local knowledge for local needs. Their leverage against foreign competitors must come from their understanding of culture and consumption patterns.

In the ideal case, everything that goes into making a product is made in India. In other words, the entire value chain is created in India, and all businesses along the value chain stand to benefit. Creating such a value chain needs entrepreneurship at all levels. Some will focus on making the mobile phone. Others will make RF transceivers. Others will focus on the baseband and modem parts of the phone. Others will supply the protocol software. Others will supply test equipment. Others will play a vital role in representing India in world organizations, so that when the time comes to certify the phone, there will be Indian companies certified to do so.

But making everything in India is not truly an ideal case, for the simple reason that India will not have an economic advantage in making everything. India would have to specialize in what it does best and most efficiently. However, in the short term the cost of imports and the poor affordability of foreign products make it worthwhile to look for substitutes within India. The main difficulty at this point is that Indians do not possess specialized technology. For example, no Indian company makes a mobile phone.

The future is promising. Indians returning from overseas are spawning ventures at all levels. While many may be looking at only services there are others who are taking to products. There are fabless semiconductor companies that do specialized design for silicon. There are companies starting to make RF components. There used to be a time when specialized products had to be imported directly from a foreign manufacturer. Today local distributors have taken shape. They stock various products, give faster quotes and shipments without businesses worrying too much about import duties and clearances.

Funding

An extension of the ecosystem is the availability of funds. Getting funds has never been easier. Getting funds has never been more difficult. Both statements are true. While opportunities are there, investors are choosy and demand is great. No two investors look at business potential in exactly the same way. The difficult task is to find the right investor who understands your industry and meets the needs of your business. Finding the right investor also means being able to retain control over your business. Investors want returns. They do not want to take over your business. So long as you can deliver, they are happy. If you can’t, they will find a way to sell your business for a profit.
Why do people want to invest in Indian start-ups? The potential is high. The market is huge. The ecosystem is growing. Everyone’s expectations are aligned and tend to reinforce a belief in growth. People’s spending power is increasing. Consumption patterns are changing and growing. With new ideas coming up all the time, there is that chance that a business will have global value as well.

I was at a recent event in Bangalore on 19th January 2008. The event was named HeadStart and took place over three days. It brought together venture capitalists and start-ups. The event consisted of demos and presentations. The VCs sat on a panel and talked about what they looked for in start-ups – a clear business case, competitive advantage, market research, solid financial projections, break-even analysis and a host of other things that will be familiar to anyone who has done a proper business plan. Investors are not interested in your technological breakthrough per se. They want to know why you are doing it, what is its value, who is going to buy it and why.

I also attend Mobile Monday Bangalore on a regular basis. Some of the sessions are attended by investors to get insights into start-ups and the technologies concerned.

Competition for funds is intense at the moment. Company valuations also tend to exhibit great variance. There is no standard set in stone. Only experience, insights and expectations of the future. If there is a standard, it cannot apply to all industries. Valuation of a company in the communications sector will be done differently from another company in the pharmaceuticals sector. The broad factors may be the same but the emphases are likely to be different.

Funding is not just about private investors. It is about making innovation possible, which translates to upfront investment. NASSCOM has taken a step in this direction by creating an Innovation Fund. The process has been in place since January this year and the Rs. 100 crore fund may be operational any time soon. Key investors are ICICI Group, Tata Consultancy Services (TCS) and Bharti Airtel. Start-ups can hope that it will not make aggressive demands on returns the way VCs do.

The government is ahead of NASSCOM. Start-up companies are beginning to take advantage of a scheme called Support International Patent Protection in Electronics & IT (SIP-EIT) under the Ministry of Communications and Information Technology. Under this scheme, the government will bear a maximum of Rs. 15 lakhs for every international patent application. It is a highly useful support for companies wanting to protect their IP.

Community

While there is competition among start-ups, they also form a community in which ideas are shared and nurtured. Such communities serve the additional role of promoting a company and its product or service. Meetings and seminars provide platforms to enable this. Partnerships may be formed, not just between companies but between individuals. New talent may be recruited and industry contacts built. Some of these events are attended by well-established, large corporations who hate to be left in the dark.

Mobile Monday Bangalore is one I have often written about. Open Coffee Club Bangalore is another community that meets regularly. BarCamp Bangalore (BCB) is another forum for sharing ideas. DevCamp Bangalore is a new event that is happening today as I write this article; it focuses on technical issues of interest to developers. I have already mentioned HeadStart. Proto.in, which concluded recently in Chennai, was an event very much like HeadStart. The whole of last week was Entrepreneurship Week at the International School of Business and Media in Bangalore, an effort at bringing industry and academia together to forge effective partnerships. It acknowledges that innovation is key in the long run and that academic institutions can fill this void.

Recently, NDTV aired a live debate that took place at IIT Delhi. The debate brought together students, Indian start-ups, British start-ups, Indian government officials and their British counterparts, including Prime Minister Gordon Brown. The debate addressed a number of issues but, at the heart of it, it sought to build a platform for greater cooperation between small and medium businesses in the two countries. There are some things that Britain does best, but there are others that Indians are doing better as entrepreneurship in India continues to mature and grow. Events like this can be considered part of the overall ecosystem that nurtures start-ups.

Conclusion

One of the greatest changes that happened in economic history is the Industrial Revolution, particularly in Britain. A number of things came together to make this revolution possible – availability of capital and risk-taking investors, a banking system, the Enclosure Act and availability of labour, the import of raw materials from the colonies, ready market, political stability, technological innovation, the power of steam and above all entrepreneurship. We may not see anything like a revolution in India but conditions are right for Indian start-ups to venture boldly.

An Introduction to GIS

The motivation for this post is simply the growing importance of Location-Based Services (LBS) in the mobile environment. To provide such services, knowledge of location and topography is needed. Services will also benefit from knowing the proximity to complementary services, routes and obstacles. All such information comes from GIS. If someone – as a developer or provider – is attempting to get into the LBS space, it is vital to understand GIS first. This post is a brief introduction to GIS as I have understood it. I am by no means an expert in this area and readers may wish to do further research on their own. Feel free to add your comments to this post.

Definition
There are two definitions of GIS – Geographical Information System and Geographical Information Science. As a system, the focus is on methods and processes. It relates to tools, software and hardware. It defines how data is input, stored, analyzed and displayed. As a science, it deals with the theory behind the system. It asks and answers why something should be done in a certain way. It looks for solutions to technical and theoretical problems. It challenges existing methodologies and models. For example, as a system there may be a defined way to transform data, but as a science justification is needed for why such a transformation is correct and applicable.

Historically, there has been no consensus within the fraternity but it is being increasingly felt that GIS is both a science and a system. One cannot survive without the other and both have their place. More than this, the understanding and use of GIS has evolved over time. In the early days it was no more than an automated mapping system. It meant the use of computers to make maps in a cost-effective manner. Perceptions changed quickly as people realized that more can be gleaned from maps than the information available from map data. Visualization brought new possibilities and the idea of spatial analysis was born. Such spatial analysis described how data is to be combined, what questions need to be asked for specific problems and what solutions could be sought by means of spatial relationships. The human eye and the brain can perceive patterns that are not obvious from data described as tables and spreadsheets.

As data became pervasive, the quantitative revolution came to the fore. Number crunching, or data-intensive processing as it came to be known in computing lingo, became popular. GIS may not have given impetus to quantitative analysis, but it surely made it important. In turn, GIS rode on improvements in quantitative analysis. Nonetheless, students studying GIS are sometimes criticized for seeing GIS as nothing more than quantitative analysis. The fact is that qualitative analysis is an important aspect of GIS. This is what separates the notions of system and science. Intuition and spatial analysis are still the primary drivers for GIS. GIS research is much more than just numbers and quantitative analysis. Figure 1 gives a snapshot of the breadth of analysis that happens in GIS [1].

Figure 1: Content analysis of GIS journals from 1995-2001 (based on 566 papers)

Applications
At this point it would be apt to look at the applications that use GIS. The real value for most people is not GIS itself, but rather how and where it is used. It has already been mentioned that in the domain of mobile communications, GIS enables LBS. Traditionally, it was used only within geography – to identify, record or classify urban areas, rural areas, forest cover, soil types and the course of rivers, to mention a few. These days it is used in many other fields. It can be used by city planners to aid decision making. For example, what’s the best route to lay a road between two points through an urban landscape without going underground? GIS can answer such a question. GIS can help in social analysis. If women in a certain area have a higher incidence of breast cancer, GIS data on various contributing factors can be combined and analyzed to arrive at an answer or a set of possible answers. For transportation and service delivery, GIS is an important tool to plan routes, profile markets and determine pricing models. E-governance uses GIS for property survey lines and tax assessments.

I will give an example of where GIS could be useful, with reference to Tesco stores in the UK. I noticed consistently that Tesco stocks a variety of Indian consumer products in outlets that are close to Indian communities. Two Tesco outlets in different locations often carry quite different items. I don’t know how this happened, but my guess is that Tesco learnt by experience. Sometimes this intelligent differentiation is missing in outlets. Had GIS been used, such stores could have stocked goods focused on ethnic community groups from the outset, with no learning period needed. Provided decision makers ask the right questions and know how best to use GIS data, Tesco could predict consumer behaviour patterns in a specific area even before it has opened its outlet there.

Approaches and Identities
It will be apparent from the diversity of applications that GIS does not mean the same thing to any two people. Cartographers, sociologists, city planners, environmentalists, geologists and scientists could all look at GIS differently. Let us take the example of mapping a forest area. Wildlife enthusiasts would map the forest cover with emphasis on habitat and conservation. They would consider how much light reaches the forest floor for the undergrowth to survive. On the other hand, forest officials more concerned with the health of trees would focus on the height and width of trees. They would consider the different types of trees and the forest canopy. If it is a commercial forest, loggers would be more concerned with factors associated with their business.

The point about GIS is that data is just a representation of reality. The same reality can be seen differently by different people and they all can be true at the same time. This is somewhat like painting. Two painters can see the same scene in quite different ways. It is said that painting is all about making choices – what details to include, what to leave out and what to emphasize. No one painted olive trees like Van Gogh yet his trees are every bit as real as the post-Romantic Realism of Courbet.

The terms used to describe this are epistemology and ontology. Epistemology is the perspective through which reality is seen. It is a sort of lens that notices some things and filters out the rest. Ontology refers to reality itself: it exists on its own but is interpreted through epistemology. The reality one person sees could be different from another’s simply because their perspectives are different. Without going into details, different epistemologies have been discussed in the literature – social constructivism, positivism, realism, pragmatism. For example, positivism believes in observations that can be repeated before deriving a theory from them. Realism places more emphasis on specific conditions and particular events. Ultimately, these approaches straddle the divide between GIS being a system and a science.

Governments, for example, may apply a certain epistemology to suit their purpose. The resulting ontology may not necessarily be the way people see themselves. Thus, Public Participation GIS (PPGIS) has become important for communities to challenge governments by defining ontologies that they believe are real, or at least more real.

For computers, an ontology is simply the way data is stored, objects are represented or object relationships are described. For example, a bridge crossing a road may pass under or over it. Such relationships are defined explicitly, and this constitutes reality for a computer. Data is not everything, but it is a key component of GIS.
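As a toy illustration of what such a computational ontology might look like, here is a minimal Python sketch. The class and field names, and the features themselves, are invented for illustration; the point is only that “passes over” versus “passes under” becomes a stored fact the computer can work with.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    """A named map feature, e.g. a road or a bridge."""
    name: str
    kind: str  # e.g. "road", "bridge"

@dataclass
class Relation:
    """An explicit spatial relationship between two features."""
    subject: Feature
    relation: str  # e.g. "passes_over", "passes_under"
    obj: Feature

# Hypothetical features: a road and a flyover crossing it.
main_road = Feature("Main Road", "road")
flyover = Feature("Bridge 7", "bridge")

# For the computer, "the flyover passes over the road" is just this record.
facts = [Relation(flyover, "passes_over", main_road)]

for f in facts:
    print(f"{f.subject.name} {f.relation} {f.obj.name}")
```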

Handling Data
This is a complex area in itself. There is data collection, classification, interpretation and representation. Broadly, there is raster data and vector data. With raster data, a geographical area is divided into units or cells, and attributes are set for each of these cells. Raster data can be easily transformed and combined; handling it is simple. This is not the case with vector data, in which the basic components are points, lines and polygons, and a geographical area is described from these components. Both are means of describing an entire area without any gaps, and are generally called field models or layer models. There are also object models, in which objects are represented within an area but the area in its entirety has not been mapped. Object models may therefore have many gaps, which may not matter for the purpose for which the maps were generated.
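To make the raster/vector distinction concrete, here is a small Python sketch; the grid values, coordinates and attribute names are all made up. A raster is essentially a grid of cells each carrying an attribute, while a vector layer is a collection of points, lines and polygons defined by coordinates.

```python
import numpy as np

# Raster: a 4x4 grid of cells, each holding a land-cover code
# (0 = water, 1 = forest, 2 = urban). Attributes live in the cells.
raster = np.array([
    [1, 1, 2, 2],
    [1, 1, 2, 2],
    [0, 1, 1, 2],
    [0, 0, 1, 1],
])

# Vector: geometry is explicit. A point, a line and a polygon,
# each defined by (x, y) coordinates plus attributes.
point = {"geometry": (2.5, 3.0), "attributes": {"type": "well"}}
line = {"geometry": [(0, 0), (1, 2), (3, 3)], "attributes": {"type": "road"}}
polygon = {"geometry": [(0, 0), (4, 0), (4, 4), (0, 4), (0, 0)],
           "attributes": {"type": "district", "name": "Ward 12"}}

# Raster operations are simple array arithmetic:
# e.g. what fraction of the area is forest?
forest_fraction = (raster == 1).mean()
print(f"Forest cover: {forest_fraction:.0%}")
```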

Scaling of data is regarded as a difficult activity that involves a lot of decision making. At 1:25000, roads, bridges and towers may be clear in an urban area. At 1:75000, such fine details may be lost. The problem is how to aggregate the data, classify it correctly and represent it at the scale of 1:75000. It all depends on the context. If a contractor tasked with maintaining bridges is looking at such a map, he should probably still see bridges even at 1:75000.

Data collection for a specific purpose is an expensive job, so it becomes necessary to share and combine data from multiple sources. The problem with combining data is that each source collected it for its own specific purpose. One source collecting tree data may classify all trees taller than 50 meters as tall; another may use a different criterion. If the actual heights have not been recorded, it becomes difficult to combine the two data sets and come up with a consistent ontology of tall trees in a forest area. On the flip side, different data sets may represent the same objects but use different terminology. For example, a “limited access road” in one set may be the same as a “secondary road” in another. Only with the help of local knowledge would we realize that they are talking about the same roads; then the two data sets can be usefully combined. Data semantics vary and need to be understood well to make the best use of data. We ought to realize that data offers a particular point of view, not an absolute reality. In this context, primary data is data collected for the purpose at hand; secondary data is data used in GIS but originally collected for a different purpose.
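A crude way to reconcile such differing terminology, assuming someone with local knowledge has already confirmed which categories correspond, is a simple lookup that maps each source’s labels onto a shared vocabulary before merging. The source and category names below are hypothetical.

```python
# Hypothetical crosswalk from each source's road classes to a shared vocabulary.
CROSSWALK = {
    "source_a": {"limited access road": "secondary", "highway": "primary"},
    "source_b": {"secondary road": "secondary", "trunk road": "primary"},
}

def harmonise(records, source):
    """Relabel a source's records using the shared vocabulary."""
    mapping = CROSSWALK[source]
    return [{**r, "road_class": mapping[r["road_class"]]} for r in records]

a = [{"id": "A1", "road_class": "limited access road"}]
b = [{"id": "B7", "road_class": "secondary road"}]

merged = harmonise(a, "source_a") + harmonise(b, "source_b")
print(merged)  # both records now carry road_class = "secondary"
```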

Attempts have been made to standardize data so that it can be merged more consistently. Metadata is used to facilitate this. Metadata are descriptors of data: they record how the data was collected, when it was collected, at what scale it applies, what classification scheme was used, and many other things. Metadata is a good step forward, but it does not entirely solve the problem of dissimilar data collected differently for different purposes. With metadata, we at least have more information about the available data so that we can use it appropriately. This is a critical part of GIS these days, as data is shared widely. Combining data without understanding the science behind it can lead to inaccurate analysis and conclusions that diverge from reality.
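In practice, a metadata record is just a structured description attached to a data set. The sketch below uses invented field names rather than any formal metadata standard, but it shows the kind of information – collection date, method, scale, classification scheme – that lets a later user judge whether the data fits their purpose.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Metadata:
    """Descriptors of a data set: how, when and at what scale it was collected."""
    title: str
    collected_on: date
    collection_method: str
    scale: str                     # e.g. "1:25000"
    classification_scheme: str
    notes: list = field(default_factory=list)

tree_survey = Metadata(
    title="District tree survey",
    collected_on=date(2007, 11, 5),
    collection_method="field survey",
    scale="1:25000",
    classification_scheme="trees > 50 m recorded as 'tall'; exact heights not kept",
    notes=["Collected for forestry planning, not for canopy modelling."],
)

# Before merging this with another survey, a user can check whether the
# classification schemes are compatible.
print(tree_survey.classification_scheme)
```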

Modelling and Analysis
With so much data available, models help us build a picture of the world and its realities. Analysis follows as a necessary step in understanding this reality. Overlay analysis is a fundamental technique: an area is seen as a stack of layers, each overlaid on top of another. Bringing together spatially distinct data sets in this way can help solve problems and answer questions. Schuurman [1] quotes the example of identifying population areas at risk of fires in Southern California. Population is on one layer. Rivers, which help break the spread of fires, are on another layer. Road networks are on another layer, relating to accessibility and user location. Tree or forest cover is yet another layer, relating to the spread of fires. This can get further complicated if we bring in local weather patterns and wind directions on another layer. Overlay is easily done with raster data but much more complex with polygon data. For computation, overlay uses set theory.
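As a toy version of raster overlay, the sketch below combines three invented layers (population, forest cover, distance to a river) with simple thresholds and cell-wise boolean set operations to flag “at risk” cells. Real fire-risk models are of course far more involved; this only illustrates the mechanics of overlay on raster data.

```python
import numpy as np

# Three co-registered raster layers over the same 4x4 area (values invented).
population = np.array([[120,  80,   5,   0],
                       [200, 150,  10,   0],
                       [ 90,  60,  40,   5],
                       [ 10,   5,   0,   0]])

forest_cover = np.array([[0.1, 0.3, 0.8, 0.9],
                         [0.2, 0.4, 0.7, 0.9],
                         [0.5, 0.6, 0.6, 0.8],
                         [0.7, 0.8, 0.9, 0.9]])

river_distance_km = np.array([[5.0, 4.0, 3.0, 2.0],
                              [4.0, 3.0, 2.0, 1.0],
                              [3.0, 2.0, 1.0, 0.5],
                              [2.0, 1.0, 0.5, 0.2]])

# Overlay = cell-wise intersection of three conditions: populated AND
# heavily forested AND far from a river (no natural firebreak nearby).
at_risk = (population > 50) & (forest_cover > 0.3) & (river_distance_km > 2.0)

print(at_risk.astype(int))
print("Cells flagged at risk:", int(at_risk.sum()))
```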

Another example is environmental modelling, which can be useful for studying levels of pollution and the areas at risk. Air emissions are modelled; noise is modelled. These models are based on factors that might be available as GIS data. Contours generated from these models highlight patterns of noise or air pollution, and these are then converted into GIS data. The next step is to overlay all this GIS data and visualize the overall impact on the environment in a particular area. Such use of GIS helps in decision making. Thus, GIS today combines visualization as well as quantitative approaches to problem solving.

Decision making exists even in the process of using GIS data. Often some areas are incompletely mapped while others are complete, and representing all of the data on a single map would be misleading. A decision has to be made about how to bring all the data to a common basis for comparison; data reduction enables one to do this. Likewise, a project attempting to model and analyze something to an accuracy of 50 meters may not be possible for reasons of privacy, one example being work with individual health data. Some process of averaging the data over a wider area must be used. Spatial boundary definitions present their own problems: GIS likes crisp boundaries, but reality rarely provides them. Scales are different. Classification criteria are different. National data is collected for a different purpose and at a different scale than taluk data, and combining the two is not exactly a trivial job. This is known as the modifiable areal unit problem (MAUP), which deals with the aggregation of data from different scales or the redrawing of the same map at a different scale.
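Here is a minimal sketch of that averaging step, with all numbers invented: fine-grained values are aggregated into coarser zones so that no individual cell, and hence no individual, is directly identifiable, at the cost of exactly the detail that MAUP warns about.

```python
import numpy as np

# Fine-grained 4x4 raster of some sensitive rate (values invented),
# say incidence per 1000 people in 50 m cells.
fine = np.array([[2.0, 3.0, 8.0, 9.0],
                 [1.0, 4.0, 7.0, 6.0],
                 [0.5, 0.5, 5.0, 5.5],
                 [0.0, 1.0, 4.0, 6.5]])

# Aggregate each 2x2 block of cells into one coarser cell by averaging,
# producing a 2x2 raster covering the same area at four times the cell size.
coarse = fine.reshape(2, 2, 2, 2).mean(axis=(1, 3))
print(coarse)

# The coarse map protects privacy but also smooths away local hotspots:
# how the zones are drawn changes the picture, which is the essence of MAUP.
```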

Conclusion
GIS is an interesting field with many practical uses. It is more than just data collected for a particular location. It is a science. It is a system. In a future post, I will look at the use of GIS specifically for LBS. From what I have learnt of GIS, a truly powerful LBS application will do a lot more than just feed users data based on their location.

References

  1. Nadine Schuurman, GIS: A Short Introduction, Blackwell Publishing, 2004.