Friday, October 26, 2007

THE UNSHUFFLE ALGORITHM

----------------------------------
By Art S. Kagel


One of the most common program utilities is the sort. And one of the most common applications of a sort utility is to sort a list of data items that may already be well-ordered or indeed, sorted.

Sort algorithms are optimal for some data classes while exhibiting worst-case behavior for others. Most sort algorithms handle well-ordered data no better than random data, and a few actually perform slightly better when the data to be sorted is random.

An algorithm that deals efficiently with well-ordered data is useful in many programming situations, such as sorting a user-entered list of similar items. People tend to enter a list of related items in a logical, though usually imperfect, order. If asked to enter a list of U.S. presidents, for example, most people will enter them in either chronological or alphabetical order, with some “errors” (for example, Ronald Reagan might be listed first in either case since he is the current president and would probably come to many people’s minds first).

Another major application for such an algorithm would be to repeatedly sort a list into which disorder may or may not be introduced from time to time. An example would be appending new items to the end of an already sorted list.

In developing the algorithm shown in Listing 1, I sought to create a method for sorting well-ordered data. What evolved was an extremely fast and efficient algorithm for data that is either well-ordered, well-ordered but reversed, or partially ordered. It compares favorably with the performance of other algorithms when applied to random data.

I have termed this algorithm the unshuffle algorithm (as opposed to sort) because of the principle behind it. My basic premise was to find an algorithm intelligent enough to know when a list is well-ordered or already sorted. (This concept is, of course, in the realm of artificial intelligence; indeed, my goal was not reached entirely.) I began by examining the nature of unordered lists with the idea that if I could determine how they become unordered, I could write an algorithm to efficiently reorder them.

Shuffling the deck

Think of a sorted list as a deck of cards, all one suit, in ascending order. If one wanted to create a condition of fairly random disorder within this deck, one might cut the deck into two or more subdecks and shuffle them back together. The resulting deck would appear to be randomly ordered (or rather, disordered).

In reality, each of the subdecks is still sorted after this first shuffle; the cards are simply intermingled. Undoing this shuffle would be a relatively simple problem to program. Of course, after several such shuffles the situation becomes more complex, and the problem of unshuffling the deck becomes equally complex but, one would hope, still manageable.

You might go about undoing the shuffle by turning the cards over one at a time onto piles. You could put a card on top of a pile if its value is greater than the top card on an existing pile or start a new pile if it is less. The result when you finished would be several sorted piles of cards (unless the deck was already sorted, in which case the first pile would be holding all the cards.)

To get back to one sorted deck, you might select the largest card showing on any pile and turn it over as bottom card on the final deck, then select the largest card still showing and turn it over on top of the first one, continuing in this manner until all the cards had been merged back into the original deck in sorted order.

This method of undoing the shuffle is the essence of the unshuffle algorithm. The algorithm tries to put items either on the top or the bottom of a pile before testing against the next pile. It then merges the piles one at a time into the first pile using a modification of the classic two pile merge.
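The card-pile procedure described above can be sketched in C. (The article's own implementations were in MS Pascal; this is a hypothetical reconstruction for small integer arrays, not Kagel's code, and the fixed buffer sizes are purely for illustration.)

```c
#define MAXN 64                      /* small fixed capacity for the sketch */

/* One pile: a deque stored in the middle of a buffer so that items can be
   placed on either the top (front) or the bottom (back) cheaply. */
typedef struct { int buf[2*MAXN]; int lo, hi; } Pile; /* items: buf[lo..hi-1] */

static void unshuffle_sort(int *a, int n)
{
    Pile piles[MAXN];
    int npiles = 0, i, p;

    if (n <= 0) return;

    /* Distribution phase: try the bottom, then the top, of each pile in
       turn; start a new pile only when the item fits on none of them. */
    for (i = 0; i < n; i++) {
        for (p = 0; p < npiles; p++) {
            Pile *pl = &piles[p];
            if (a[i] >= pl->buf[pl->hi - 1]) { pl->buf[pl->hi++] = a[i]; break; }
            if (a[i] <= pl->buf[pl->lo])     { pl->buf[--pl->lo] = a[i]; break; }
        }
        if (p == npiles) {           /* fits nowhere: start a new pile */
            piles[npiles].lo = MAXN;
            piles[npiles].hi = MAXN + 1;
            piles[npiles].buf[MAXN] = a[i];
            npiles++;
        }
    }

    /* Merge phase: fold pile 1, 2, ... into the running result using the
       classic two-list merge. */
    int out[MAXN], tmp[MAXN], outn = 0, t, j, k;
    for (k = piles[0].lo; k < piles[0].hi; k++) out[outn++] = piles[0].buf[k];
    for (p = 1; p < npiles; p++) {
        Pile *pl = &piles[p];
        i = 0; j = pl->lo; t = 0;
        while (i < outn && j < pl->hi)
            tmp[t++] = (out[i] <= pl->buf[j]) ? out[i++] : pl->buf[j++];
        while (i < outn)   tmp[t++] = out[i++];
        while (j < pl->hi) tmp[t++] = pl->buf[j++];
        for (i = 0; i < t; i++) out[i] = tmp[i];
        outn = t;
    }
    for (i = 0; i < n; i++) a[i] = out[i];
}
```

On already-sorted input the distribution phase builds a single pile and the merge phase has nothing to do, which is where the algorithm's speed on well-ordered data comes from.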

Distribution and merge

This algorithm is more efficient for a list that is already well-ordered (like the first-order shuffle) because unlike a sort (which tries to introduce order into the disorderly world) the unshuffle algorithm tries to extract what order it can find. The algorithm simply goes about distributing cards onto piles until there are no more in the deck. It then performs a simple merge of the piles. If the data is orderly there will be fewer piles and both the distribution phase and merge phase will run more efficiently, making fewer comparisons.

Most sort algorithms need to complete their tasks even if the data is completely sorted. A classic shell sort, for example, would need to make at least N/I passes (where I is the number of initial partitions—see D. Knuth’s The Art of Computer Programming, vol. 3, page 84ff), even over an already sorted list of N elements, making from N/K(1) to N-1 comparisons (where K(i)= K(i-1)/I and K(1)=N/I) on each pass.

The unshuffle algorithm can perform its distribution phase in just one pass with from N-1 to N**2/2 comparisons, while the merge phase performs from N-4 to N**2/8 comparisons. In addition, the algorithm gains efficiency over other sorts because it never exchanges two elements and never inserts an element between two others—both expensive operations. It is also space efficient since relatively few piles are created (except in the worst case, where there are N/2 piles).

The distribution phase of the unshuffle algorithm exhibits its worst-case behavior under the rare circumstance that the smallest item is followed by the largest, which is followed by the next smallest followed by the next largest, and so on (for example, 1, 10, 2, 9, 3, 8, 4, 7, 5, 6). The order of the distribution phase in this case becomes ((N**2)-1)/2, or approximately N**2/2.

Fortunately, this class of data also represents the best general case for the merge phase of the algorithm, which in this instance is of order N. The best case for the distribution occurs when the list to be operated on is perfectly ordered but reversed (such as 10, 9, … 2, 1). In this case the order of the distribution phase of the algorithm becomes (N-1), approximately N.

For a perfectly ordered list the order is (N-1)*2, which is also proportional to N. (The order of the sorted and sorted-but-reversed cases could be exchanged by reversing the sequence in which the next element is tested against the top and bottom elements of each pile.) The order of the merge phase in the perfectly ordered cases is zero, as only one pile is generated and there is nothing to merge.
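For concreteness, the distribution-phase worst case described above (1, 10, 2, 9, ...) can be generated like this. (A hypothetical helper for experimentation, not from the article.)

```c
/* Build the distribution-phase worst case 1, N, 2, N-1, 3, N-2, ...
   Every item after the first pair is strictly between the top and bottom
   of every existing pile, so roughly N/2 piles are forced. */
static void worst_case_order(int *a, int n)
{
    int lo = 1, hi = n;
    for (int i = 0; i < n; i++)
        a[i] = (i % 2 == 0) ? lo++ : hi--;
}
```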

The merge phase of the unshuffle algorithm takes advantage of a property of the piles to minimize the number of comparisons needed and to eliminate unnecessary code. This property is a result of the logic used during the distribution phase. In a simple merge of two lists (see The Art of Computer Programming, vol. 3, page 159f) containing M and N elements, M+N-1 comparisons (at most) would be necessary. On the other hand, three lists of M, N and L elements merged in pairs would take at most 2*(M+N)+L-2 comparisons.

In general, the order of a pairwise merge of P equal piles is (N/(P-1)*(P*(P+1)/2))/2. But due to certain properties of the distribution onto piles, the merge phase of the unshuffle algorithm will need at most N*(P-1)-2*(P-1)**2 comparisons when all piles contain two elements except the first (where P is the number of piles).

When all piles have two elements except the last, the merge phase exhibits its best general case behavior, performing N-2 comparisons. At its worst (when P=N/4+1), this general equation is marginally better than a general merge (one with no knowledge). At its best (when P=N/2, where only N-4 comparisons are performed), it is considerably better.

The distribution of the elements of the original list onto piles guarantees that the smallest element will be on top of the first pile created and the largest element will be on the bottom of the first pile. Each pile is wholly contained between the first and the last elements of the pile created before it. Therefore, when merging the Rth pile into the list that resulted from merging the first through (R-1)th piles, it is not necessary to compare elements in the Rth pile to any element in the previously merged list that is smaller than the element that was originally the smallest on the (R-1)th pile.

In addition, it is only necessary to test for exhaustion of the pile, rather than of the previously merged list, as long as you begin the merge with the first pile and end it with the last pile. This eliminates the complex logic needed to keep track of which pile will become empty first.

Unshuffle in action

Table 1 (not available here) presents comparative timings that resulted from an implementation of the unshuffle algorithm and a similar implementation of shell sort. The implementation was written in MS Pascal using MS Pascal language constructs and MS library procedure calls. Since it was designed as a library utility it is generally useful and has user-supplied procedures (passed as parameters) for comparisons and managing data items. It is therefore independent of the data item’s storage structure and type.

However, this implementation is somewhat slower than a pure implementation of the algorithm optimized for a specific data structure, due to procedure call overhead. The user-supplied comparison procedure looks at one element in relation to two others to determine whether it is less than or equal to the first, greater than or equal to the second, or somewhere between the two. This procedure improves performance in all cases except the well-ordered-but-reversed case.

The shell sort algorithm used is a similar utility that also has user-supplied procedures for comparisons and data item management. The shell sort uses the K(i)=K(i-1)/I, K(1)=N/I, I=2 algorithm, where K is the interval of comparison and I is the number of partitions for the first pass. It is a good implementation that is considerably more efficient for well-ordered data than for random data.

The timings were run on an IBM PC/AT with word data stored in a singly linked list allocated on the long heap. Both algorithms were called to sort identical lists created using a random number generator (values were in the range 0…64K). Both were then asked to resort their output to acquire the timings for presorted data. Times were taken from the system clock immediately before and after the calls to the respective sort procedures.

It is obvious from the timings in Table 1 that for already-sorted data the order of the sort is approximately N. This variance from the expected order of ~2N is due to the implementation’s use of a single procedure call to compare an element against two others; the savings come from reduced procedure call overhead rather than algorithmic efficiency.

(Note from the timings that the order of the unshuffle algorithm is apparently 3N for the pseudorandom data that was generated. I have not attempted to determine the theoretical order of the algorithm for the random case and would welcome any reader input on the subject.)

Listing 1 shows the unshuffle algorithm in words. Listing 2 is a pure implementation of the algorithm in MS Pascal for integer data stored on the heap in a singly linked list. A third listing (which shows the generalized implementation used for the timings in Table 1) can be found on the COMPUTER LANGUAGE Bulletin Board Service and CompuServe forum.

I should note that the unshuffle algorithm does have one restriction: implementations of it can only sort a linked list containing either data elements or pointers to data elements (including array indices and file positions). I have not yet been able to write an implementation that can sort arrays directly.







-----------------------------------------------------------------------------------

Art Kagel has over five years’ experience as an analyst and programmer developing applications for media market research firms and the insurance industry. He is currently a consultant with Business Logic Inc., New York, N.Y., a consulting firm providing computerized solutions to business problems.
(This info was written around the 80s)

(I found this article interesting and so I published it here. It is taken from a very old computer magazine, whose name I have forgotten, from around the 1980s.)

PGM image creation and kernel operation C code




/*************************************************************************************

=>THE FOLLOWING PROGRAM CREATES A 128 X 64 RESOLUTION IMAGE IN PGM FORMAT AND

=>THEN CREATES A TWO DIMENSIONAL PIXEL ARRAY BY READING FROM THE IMAGE FILE.

=>IT THEN APPLIES A LOW PASS FILTER USING A 3 X 3 KERNEL.

=>FINALLY IT APPLIES NOISE TO THE FILTERED IMAGE USING RANDOM NUMBERS

In ImageFilter.h header file, there are functions to create pixel array and filter image
using a low pass filter.

FUNCTION NAMES: void CreatePixelArray();
void LowPassFilter(int kernel[K][K]);

*************************************************************************************/


#include "ImageFilter.h"


/*int kernel[K][K]={ {1, 1, 1},
{1, 1, 1},
{1, 1, 1}
};*/


int kernel[K][K]={ {1, 4, 1},
{4, 8, 4},
{1, 4, 1}
};
int main(){

int num=0;
/*IMAGE CREATION*/
CreateImage();


/*READING FROM IMAGE FILE AND CREATION OF ORIGINAL PIXEL ARRAY*/

CreatePixelArray();


/*LOW PASS FILTER USING 3 X 3 KERNEL (SPATIAL OPERATION)*/

LowPassFilter(kernel);

/*APPLYING NOISE TO EACH PIXEL RANDOMLY USING GENERATED RANDOM NUMBERS WITHIN 1 TO 10*/


/*AddNoise(num);*/
AddNoise();

return 0;
}




/*HEADER FILE FOR LOW PASS FILTER OPERATION USING 3 X 3 KERNEL AND CREATE PIXEL ARRAY*/



#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <ctype.h>
#include <time.h>


#define HEADER "P2\r\n128 64\r\n255\r\n"
#define pi 3.1416
#define M 128
#define N 64
#define MAX 10
#define K 3
#define MAX1 10
#define MAX2 100

int pixel_array[N][M],sum=0;
char char_array[MAX];
char ch[]=HEADER;
int array[K][K];

int unique_rand_int[MAX1];



/*CREATE IMAGE*/
void CreateImage()
{
int num=0;
double f=0.00;
int m=0,n=0;
char str[6];

FILE *S_STREAM;



S_STREAM=fopen("out2.pgm", "w");


fprintf(S_STREAM,"%s",ch);

for(n=0;n<N;n++)
{
for(m=0;m<M;m++)
{
f=(float)((127.0*sin(2*pi*m/128.0)*sin(pi*n/64.0))+127.0);
sprintf(str,"%.0f",f);
num=atoi(str);
fprintf(S_STREAM,"%d",num);
fprintf(S_STREAM," ");
}
fprintf(S_STREAM,"\n");

}

fclose(S_STREAM);


}



/*CREATE PIXEL ARRAY*/


void CreatePixelArray()
{
FILE *R_STREAM;
char c='a';
int newline=0,x=0;
int flag=0,num=0,i=0,j=0;
R_STREAM=fopen("out2.pgm","a+");

while(!feof(R_STREAM)){

c=fgetc(R_STREAM);

if(c=='\n')
{
newline++;
if(newline>3)
{
j++;
i=0;
}
}

if( (newline==3) && (flag==0) )
{
flag=1;

}

if(flag==1)
{


if(isdigit(c))
{

char_array[x]=c;
x++;

}
if(c==' ')
{
char_array[x]='\0';
num=atoi(char_array);
pixel_array[j][i]=num;
i++;

for(x=0;x<MAX;x++)
{
char_array[x]='\0';
}

x=0;
}



}

}

fclose(R_STREAM);

}



/*APPLYING LOW PASS FILTER USING 3 X 3 KERNEL*/



void LowPassFilter( int kernel[K][K])
{

int i=0, j=0, m=0, n=0,pixel=0;
FILE *fp;


/*CREATING NEW FILE FOR FILTERED IMAGE*/

fp=fopen("filter.pgm","w");
fprintf(fp,"%s",ch);

for(i=0;i<3;i++)
{
for(j=0;j<3;j++)
{
sum=sum+kernel[i][j];
}
}

for(n=0;n<N;n++)
{
for(m=0;m<M;m++)
{
pixel=0; /*RESET THE ACCUMULATOR FOR EACH OUTPUT PIXEL*/

if( ((n-1)<0) && ((m-1)<0) ) /*UPPER LEFT CORNER*/
{
array[0][0]=0;
array[0][1]=0;
array[0][2]=0;
array[1][0]=0;
array[1][1]=pixel_array[n][m];
array[1][2]=pixel_array[n][m+1];
array[2][0]=0;
array[2][1]=pixel_array[n+1][m];
array[2][2]=pixel_array[n+1][m+1];
}
if( ((n-1)<0) && ((m+1)>127) ) /*UPPER RIGHT CORNER*/
{
array[0][0]=0;
array[0][1]=0;
array[0][2]=0;
array[1][0]=pixel_array[n][m-1];
array[1][1]=pixel_array[n][m];
array[1][2]=0;
array[2][0]=pixel_array[n+1][m-1];
array[2][1]=pixel_array[n+1][m];
array[2][2]=0;
}
if( ((n+1)>63) && ((m-1)<0) ) /*LOWER LEFT CORNER*/
{
array[0][0]=0;
array[0][1]=pixel_array[n-1][m];
array[0][2]=pixel_array[n-1][m+1];
array[1][0]=0;
array[1][1]=pixel_array[n][m];
array[1][2]=pixel_array[n][m+1];
array[2][0]=0;
array[2][1]=0;
array[2][2]=0;
}
if( ((n+1)>63) && ((m+1)>127) ) /*LOWER RIGHT CORNER*/
{
array[0][0]=pixel_array[n-1][m-1];
array[0][1]=pixel_array[n-1][m];
array[0][2]=0;
array[1][0]=pixel_array[n][m-1];
array[1][1]=pixel_array[n][m];
array[1][2]=0;
array[2][0]=0;
array[2][1]=0;
array[2][2]=0;
}
if( (n==0) && ((m-1)>=0) && ((m+1)<=127) ) /*UPPER BORDER*/
{
array[0][0]=0;
array[0][1]=0;
array[0][2]=0;
array[1][0]=pixel_array[n][m-1];
array[1][1]=pixel_array[n][m];
array[1][2]=pixel_array[n][m+1];
array[2][0]=pixel_array[n+1][m-1];
array[2][1]=pixel_array[n+1][m];
array[2][2]=pixel_array[n+1][m+1];
}
if( (n==63) && ((m-1)>=0) && ((m+1)<=127) ) /*LOWER BORDER*/
{
array[0][0]=pixel_array[n-1][m-1];
array[0][1]=pixel_array[n-1][m];
array[0][2]=pixel_array[n-1][m+1];
array[1][0]=pixel_array[n][m-1];
array[1][1]=pixel_array[n][m];
array[1][2]=pixel_array[n][m+1];
array[2][0]=0;
array[2][1]=0;
array[2][2]=0;
}

if( (m==0) && ((n-1)>=0) && ((n+1)<=63) ) /*LEFT BORDER*/
{
array[0][0]=0;
array[0][1]=pixel_array[n-1][m];
array[0][2]=pixel_array[n-1][m+1];
array[1][0]=0;
array[1][1]=pixel_array[n][m];
array[1][2]=pixel_array[n][m+1];
array[2][0]=0;
array[2][1]=pixel_array[n+1][m];
array[2][2]=pixel_array[n+1][m+1];
}
if( (m==127) && ((n-1)>=0) && ((n+1)<=63) ) /*RIGHT BORDER*/
{
array[0][0]=pixel_array[n-1][m-1];
array[0][1]=pixel_array[n-1][m];
array[0][2]=0;
array[1][0]=pixel_array[n][m-1];
array[1][1]=pixel_array[n][m];
array[1][2]=0;
array[2][0]=pixel_array[n+1][m-1];
array[2][1]=pixel_array[n+1][m];
array[2][2]=0;
}
if( ((n-1)>=0) && ((n+1)<64) && ((m-1)>=0) && ((m+1)<128) ) /*ALL PIXELS INSIDE BOUNDARY*/
{
array[0][0]=pixel_array[n-1][m-1];
array[0][1]=pixel_array[n-1][m];
array[0][2]=pixel_array[n-1][m+1];
array[1][0]=pixel_array[n][m-1];
array[1][1]=pixel_array[n][m];
array[1][2]=pixel_array[n][m+1];
array[2][0]=pixel_array[n+1][m-1];
array[2][1]=pixel_array[n+1][m];
array[2][2]=pixel_array[n+1][m+1];


}
for(i=0;i<3;i++)
{
for(j=0;j<3;j++)
{


pixel=pixel+( array[i][j] * kernel[i][j] );

}

}

pixel=(int)(pixel/sum);


printf("%d ", pixel);
fprintf(fp,"%d",pixel);
fprintf(fp," ");

}
fprintf(fp,"\n");
}

fclose(fp);

}

/*RANDOM NUMBERS*/

int RandomNumbers()
{


int index=0,num=0, limit=0,flag=-1;
int random_integer[MAX2];
int lowest=1, highest=MAX1;
int range=(highest-lowest)+1;
int count=0;
int index2=0;



srand((unsigned)time(0));
for(index=0; index<MAX2; index++)
{
random_integer[index] = lowest+(int)(range*rand()/(RAND_MAX + 1.0));

}


unique_rand_int[count]=random_integer[count];
limit++;
for(index=0;index<MAX2;index++)
{
num=random_integer[index];


for(index2=0; index2<limit; index2++)
{
if(num==unique_rand_int[index2]){
flag=1;
break;
}
else flag=0;
}

if(flag==0)
{
limit++;
count++;
unique_rand_int[count]=random_integer[index];
}
}

return count+1; /*NUMBER OF UNIQUE VALUES STORED*/
}



/*NOISE ADDITION BY RANDOM NUMBERS*/


void AddNoise()
{
int num=0;
FILE *F_N;
int m=0, n=0,pixel=0, count=0;

F_N=fopen("AddNoisetoFilter.pgm", "w");

fprintf(F_N,"%s",ch);

num=RandomNumbers();
for(n=0;n<N;n++)
{
for(m=0;m<M;m++)
{
if(count<num)
{
if(pixel_array[n][m]<=127)
pixel=pixel_array[n][m]+unique_rand_int[count];
if(pixel_array[n][m]>127)
pixel=pixel_array[n][m]-unique_rand_int[count];
count++;
}

if(count>=num)
{
count=0;
}

printf("%d ", pixel);
fprintf(F_N,"%d",pixel);
fprintf(F_N," ");

}

fprintf(F_N,"\n");
}


}

Friday, September 7, 2007

Power Line Communication

Powerline Communication (PLC) is a breakthrough communications technology for high-speed data, voice, and media transfer over existing power lines. PLC provides home networking and broadband Internet access using the existing electric infrastructure, without the need to install new networks. PLC is a cost-effective broadband telecommunication system offering high speed, opening a wide range of new business opportunities and improving the operations of the electric system.

Power lines, unlike fiber-optic lines, are already installed in U.S. homes. Current power line speeds are 3-4 megabits per second, but a new generation of modem chips has opened the door to in-home access at about 10Mbps. The average U.S. broadband connection speed obtained through cable or DSL now stands at about 2Mbps.

Some 36 million U.S. homes are connected to the Web through broadband. By next year, the number of broadband subscribers in the U.S. will surpass the number of dial-up users and comprise more than half of the nation’s 83 million Internet households. Fees for broadband access, meanwhile, are dropping by double-digit percentages annually.

South Korea, with a population about one-sixth that of the U.S., has 17 million broadband users, an 80 percent penetration rate. More importantly, those users pay no more than American broadband subscribers and get access speeds at least four times those of U.S. broadband, often as fast as 20Mbps. Japan has 25 million subscribers who receive such super-fast services. In both markets, access fees average about $20 a month, significantly less than what Americans pay.

There are about 140 million broadband subscribers worldwide, and this number is expected to more than double by 2008.

For utilities, existing power lines are capable of delivering the radio frequencies that carry broadband data. What they need to install are fiber lines that carry bits of data from a central distribution office and “inject” them onto localized power grids. From there the data can be distributed to individual homes. A bonus for consumers is that once a home is subscribed, any power outlet can be used to link all components capable of networking, eliminating the need to invest in Ethernet cabling or a wireless system.

PLC promises high-speed data transmission with higher data rate than ADSL without adding new wiring, and it enables broadband services like internet and VoIP easily through existing power lines.

As there remain technical problems with the distance and quality of telephone lines, broadband Internet service has not expanded in Russia. The 200Mbps PLC modems shipped to Russia promise high-speed communication services over access networks. They consist of a head end, which receives high-speed communication signals; a repeater, which amplifies the signals; and customer premises equipment (CPE), which is installed at the subscriber’s house. The modems employ the orthogonal frequency division multiplexing (OFDM) modulation scheme and achieve low-noise communications by superimposing many sub-carriers. PLC chips by DS2, a Spanish chip vendor, are adopted in the modems.

Commercial deployment of PLC has been accelerating in Europe and the USA. In Korea, regulations on PLC will be eased this month to promote PLC deployment. Only experiments are allowed in Japan, due to concerns that PLC radiation could cause interference to other radio receivers. Deregulation of PLC services in Japan is expected after the influence of PLC on radio receivers is evaluated.

Broadband over Power Lines (BPL), also known as power-line Internet or Powerband, is the use of PLC technology to provide broadband Internet access through ordinary power lines. A computer (or any other device) would need only to plug a BPL “modem” into any outlet in an equipped building to have high-speed Internet access.

BPL offers obvious benefits over regular cable or DSL connections: the extensive infrastructure already available would appear to allow more people in more locations to have access to the internet. Also such ubiquitous availability would make it much easier for other electronics, such as televisions or sound systems to hook up. However, variations in the physical characteristics of the electricity network and the current lack of IEEE standards mean that provisioning of the service is far from being a standard, repeatable process and the amount of bandwidth a BPL system can provide compared to cable and wireless is in question. High-speed data transmission or broadband over the Power Line uses the electric circuit between the electric substations and home networks. A standard used for this is ETSI PLT.

PLC modems transmit in the medium and high frequency bands (1.6 to 30 MHz electric carrier). The asymmetric speed at the modem is generally from 256kbit/s to 2.7Mbit/s. At the repeater situated in the meter room, the speed is up to 45Mbit/s, and up to 256 PLC modems can be connected. In the medium voltage stations, the speed from the head ends to the Internet is up to 135Mbit/s. To connect to the Internet, utilities can use an optical fiber backbone or a wireless link.

The functionality of the system is very simple. An MV Node injects data coming from the network operator into the medium voltage power line. Data flows over the electric grid from the substations to the final users’ sockets. This data is sent using different frequency bands through the low voltage lines, where repeaters are installed. A repeater regenerates the PLC signal on the low voltage lines between the units of the system: the MV Node, an LV Head End, other repeaters, and the CPEs. Finally, the end user connects the CPE, designed for quick and easy installation, to a standard electric socket to receive the data signals.

PLC allows access to new territories in developing countries, remote rural areas, and residential buildings. The technology is generally applicable to all types of network specifications, topologies, frequencies, and voltages. Large-scale projects can be implemented and are economically attractive, providing new sources of revenue for companies. Because it uses existing power lines, there is no need for additional cable connections. It offers point-to-multipoint access via distribution lines for multiple users with only one master modem, and it provides fast data access in the high frequency band. It also helps improve the use of existing fixed assets, and enables energy efficiencies, cost savings, and operational improvements.

Saturday, July 28, 2007

Submarine Cable

Fiber optic technology experienced a phenomenal rate of progress in the second half of the twentieth century. Early all-glass fibers experienced excessive optical loss, the loss of the light signal as it traveled the fiber, limiting transmission distances.

This motivated scientists to develop glass fibers that included a separate glass coating. The innermost region of the fiber, or core, was used to transmit the light, while the glass coating, or cladding, prevented the light from leaking out of the core by reflecting the light within the boundaries of the core.

In 1990, Bell Labs transmitted a 2.5 Gb/s signal over 7,500 km without regeneration. In their 1998 experiment, Dense Wavelength-Division Multiplexing (DWDM) technology, which allows multiple wavelengths to be combined into one optical signal, increased the total data rate on one fiber to one terabit per second (10^12 bits per second). Today, DWDM technology continues to develop. Because of fiber optic technology’s immense potential bandwidth, 50 THz or greater, there are extraordinary possibilities for future fiber optic applications. Broadband service available to a mass market opens up a wide variety of interactive communications for both consumers and businesses, bringing to reality interactive video networks, interactive banking and shopping from the home, and interactive distance learning.

With the installation of the submarine cable at its landing station on the Cox’s Bazar coast, Bangladesh embarked onto the global information superhighway on May 20th, 2006. Thus a door to new opportunities has opened. Now our country will have a 10-gigabit per second data-transfer capacity, that is, about 68 times higher than the current speed. The capacity is considered adequate for the next 10 years, and the submarine cable has a life of 15 years. The commissioning of the new system is supposed to set a landmark in the country’s telecommunications and information communication technology sector, as it should tremendously enhance performance and capacity in this field. IT-enabled value-added services like call centres, tele-medicine, and distance education at overseas universities are among the sectors of high-tech productive activity. The state-owned Bangladesh Telegraph and Telephone Board (BTTB) and over 100 Internet service providers (ISPs) currently have a 155-megabit per second data-transfer capacity.


The South East Asia-Middle East-West Europe-4 project connects the country with undersea optical fibre passing from Singapore through Malaysia, Thailand, Bangladesh, India, Sri Lanka, Pakistan, and a number of Middle-Eastern countries, to finally land in France. Bangladesh signed an agreement with 12 countries under a consortium in 2004 to implement this mega-project, at a cost of Tk 628 crore, for laying the submarine cable along the seabed. Unfortunately, Bangladesh had earlier missed the opportunity to get onto the information superhighway by not joining the group when the submarine cable first passed through the Bay of Bengal.

Under the Submarine Cable Project, the government claims to be implementing a number of schemes. These include construction of the cable-landing station, a 22,000-kilometre trunk route and a 1,200-1,400-km branch route of submarine cable, installation of an optical fibre link between Chittagong and Cox’s Bazar along the highway, upgrading of the existing Chittagong-Dhaka optical fibre link, installation of an optical link between Chittagong and Cox’s Bazar over the electrical grid line, and other installations.


So, the submarine cable is expected to enhance the capacity of telecommunications and data transfer significantly from the present level, giving users a high-speed, low-priced Internet and telecommunications gateway to catch up with the fast-moving world, and our country should earn millions in foreign currency using this cable. But this is only a superficial hope regarding the future of the submarine cable; the actual situation is not so rosy.

It is true that the country has linked itself to the superhighway, but no policy whatsoever has been made for distributing or making proper use of the connection. The present owner of this submarine cable is the Bangladesh Telegraph and Telephone Board (BTTB).

Hostilities have arisen between the BTTB and the Internet service providers (ISPs). Most of the ISPs cannot accept BTTB’s authority over the submarine cable, because Internet, mobile, and fixed telephone services are the prime uses of the submarine cable and BTTB itself offers all three services. The question therefore arises: how will BTTB justly distribute the bandwidth among the others?

The Internet service providers, including the ISPAB and other such trade bodies, are willing to create a committee to monitor all matters related to the submarine cable. They cannot lawfully accept BTTB’s monopoly control, as there is competition between the services they provide and those of BTTB. They believe that BTTB has neither a mission nor a vision for making any kind of policy regarding distribution of the bandwidth.

As we know, the submarine cable that our country is connected to is capable of carrying 10 gigabits per second. The interesting news, however, is that at present only 155Mbit/sec is being transmitted over this cable. Due to the scarcity of necessary equipment and infrastructure, the complete capacity of the submarine cable cannot be used at the moment, and for that reason BTTB has to be satisfied with only a 155Mbit/sec transfer rate. A part of this rate is used by BTTB, and the rest is distributed to the ISPs. At present only STM-1 is active, for which only 155Mbit/sec is available from the submarine cable. BTTB should implement STM-16 as soon as possible, but it does not have the required equipment or infrastructure, and acquiring them will probably take BTTB a few years.

Moreover, complaints are being raised that BTTB lacks technical personnel with proper knowledge in this field, and it is evident from the present situation that even after the submarine cable has been connected, the relevant work is proceeding very slowly. It is therefore a matter of great doubt whether BTTB will be able to provide the submarine cable service properly.

The submarine cable connection has been cut about nine times since it was first connected, so concerned people suspect that such failures may recur many times in the future. If the internet connection is made solely dependent on the submarine cable, the ISPs will be in trouble, because BTTB has no backup line at the moment. Almost every country maintains backup lines for its submarine cable, since the cable can be cut at any time in some other country or for other unavoidable reasons. BTTB claims it will provide backup through VSAT (Very Small Aperture Terminal) satellite links and microwave, but this backup is very weak, and the internet service providers will therefore have to use their own VSATs for backup.

Recently, BTTB announced a new tariff plan for internet use. The rates for connections below 2 Mbit/s over the CMEU4 submarine cable are as follows: registration fee 20,000 Taka and testing fee 75,000 Taka; for 128 kbit/s, a modem charge of 84,000 Taka and a yearly fee of 21,000 Taka; for 192 kbit/s – 1 Mbit/s, a lifetime charge of 1,26,000 Taka and a yearly fee of 31,500 Taka; and for more than 1 Mbit/s, a lifetime charge of 2,31,000 Taka and a yearly fee of 57,750 Taka.

The tariff plan announced by BTTB will not benefit the public, because it leaves no room for any change in the current rates. Since the ISPs will keep their own VSATs as backup alongside the submarine cable, their costs will not fall, and they will therefore be unable to offer internet service to the public at any lower price than at present. On the whole, however, the speed of existing connections can be expected to improve: work that previously required an hour can now be done in about 40 minutes.

Many media outlets have reported that Bangladesh will be able to earn millions in foreign currency by providing voice-related services once it is connected to the submarine cable. However, it is also true that VoIP is still illegal in our country, and it must be legalized if the call-center business is to flourish. While distributing bandwidth to the ISPs, BTTB has prohibited voice-related services, so businesspersons cannot run any call-center business at the moment.

The fact is that some of the government's own personnel are running VoIP businesses illegally, which is why the government is suffering from indecision. But if the government grants special permission, call centers can thrive, and the law will no longer stand in the way. If VoIP is made legal, the country will earn about 1,500 crore Taka each year, and the government's revenue will grow along with it. The country will attract ever more call-center work; the volume is expected to increase to such an extent that in the first two years there will be job opportunities for about 1 million people.

The difficulties in taking full advantage of the submarine cable can therefore be summarized as follows:

  1. Only 155 megabits per second of data is being transferred over a submarine cable that has a capacity of 10 gigabits per second.
  2. Since there is no strong backup, the ISPs cannot rely completely on the submarine cable.
  3. Instead of falling, the price of internet service will remain the same.
  4. Providing the full advantage of the submarine cable will take BTTB a few more years.

It is no fallacy that the submarine cable should open the door to thousands of possibilities. But to make that real, proper distribution of the cable's bandwidth is vital, and it must be distributed throughout our country. At the same time, the restrictions imposed by our country's laws should be lifted. To gain the full advantage of this submarine cable, the government, non-government organizations, the enlightened section of society, and the general public must step forward and work together in unison. Only then can we expect to see our country accelerate in the field of information technology.

P.S. This article of mine was published on August 24, 2006 in the Daily News Today.