
Early Multiprocessing: A Killer App

While lack of software support for multiprocessing has deterred buyers from investing in expensive multiprocessing servers, Schatt and other analysts forecast better days ahead. The proliferation and expansion of LANs and the demands of the mission-critical applications running on them will spur buyers and advance the market over the long run, they said.

“I think the important factor is to look at the big picture. Downsizing [and] enterprise networking [are] definitely going to be a force,” Schatt said. “Probably 1994 will be the year of the super server. We’ll see significant growth in 1993 and 1994.”

According to Schatt, overly optimistic expectations have contributed to the perception that the market has not fulfilled its promise. Market researchers had estimated a $600 million to $700 million market by now, Schatt said, “and it’s not. It’s a $400 million market.”

The sagging economy is one reason for sluggish growth in super servers. Many who might otherwise have bought multiprocessor servers, which generally cost tens of thousands of dollars, have postponed their purchases, Schatt said.

But the key problem, most analysts agree, is the lack of network operating system software, particularly a multiprocessing version of Novell Inc.’s NetWare. With currently available versions of NetWare, only one CPU can be employed.

“The software-compatibility issues are going to be the inhibiting factor,” said Christopher Goodhue, a senior industry analyst at Gartner Group Inc., a market-research firm in Stamford, Conn.

“There just isn’t a lot of support out there for multiprocessing — with the exception of Unix,” Goodhue said.

The timing of an economic upturn is anybody’s guess, but the software solutions may be at hand.

The expected 1993 arrivals of NetWare 4.0 and Windows NT, both purported to support multiprocessing, should stimulate multiprocessor-server sales, according to analysts.

The growing number of corporations moving from mainframe and minicomputer environments down to LANs will drive greater demand for multiprocessor servers, analysts said.

On the hardware side, super servers allow corporations to streamline their networks.

“You can now consolidate four or five or six LANs, each of which is running on a separate PC server, into a super server,” said Jim Edwards, president of Tricord Systems Inc. in Plymouth, Minn.

“It’s a lot less labor-intensive to have one person administer a super server than to administer four or five or six servers,” said Edwards.

Eric Johnson, manager of hardware marketing at NetFrame Systems Inc., a super-server manufacturer in Milpitas, Calif., also cites LAN consolidation as an impetus for server sales.

“People are beginning to recognize the benefits of super servers for simplifying their LANs. They’ve got too many servers; super servers are a way to reduce the cost and headaches of larger networks,” Johnson said.

On the software side, the mission-critical applications being ported from mainframes and minicomputers to LANs need more power and fault tolerance than traditional servers offer.

“When they write [those applications] for a LAN, they are going to need a powerful server,” Schatt said.

Goodhue at Gartner Group agreed. “Clearly, the number of connected PCs is substantial,” he said. “The number of LANs being bridged [and] the number of applications requiring sophisticated software is increasing.

“[They're] going to need a better traffic cop and more powerful systems to handle those applications.”

And buyers are beginning to realize that need, said NetFrame’s Johnson. “The application market is really heating up where the fault tolerance of [super servers] is very attractive to buyers. It’s taken time for end users and LAN administrators to realize these systems are out, that they do solve problems,” he said.

Super-server maker Parallan Computer Inc. of Mountain View, Calif., aims its products more toward mission-critical applications than toward large LAN consolidations.

Davis Fields, vice president of marketing for the company — which IBM recently bought a stake in — said that is how Parallan distinguishes its multiprocessor products from the competition.

“The machine we’ve replaced most often since we started is [Digital Equipment Corp.'s] VAX. We’re very much focused on being midrange,” Fields said, adding that he thinks the other multiprocessor-server manufacturers are more appropriate to the PC level.

“They’re pursuing a path where they’re selling servers like PCs, at the departmental work-group level. We are calling on MIS managers,” Fields said.

Parallan’s competitors might disagree with that assessment, however, as does Schatt at InfoCorp. He said the multiprocessor-server manufacturers are all trying to appeal to the MIS level.

One obstacle to that effort is the small size and relative newness of some of the multiprocessor-server companies.

For example, Schatt said, some buyers are reluctant to commit their mission-critical applications to expensive servers from these firms.

“One of the problems [is] the concerns buyers have about these smaller companies: Will they be here when they’re needed?” Schatt said.

Fields at Parallan said some of his potential customers have expressed those reservations; that hesitation helped to fuel the deal with IBM.

“I’m asking people to think of me as a way to process true mission-critical applications,” he said.

“[Buyers] looked at the statistics about the success rates for startups, and despite the fact they thought the product is technologically superior, they said, ‘We’re going to have to think about it.’”

Big Blue’s presence has had a dramatic effect on Parallan’s sales, Fields said. “People we had been talking to for over a year came back to us even before the deal was signed, because they were confident because of” IBM’s backing.

At first glance, price might also seem an inhibiting factor for buyers. But, according to Schatt, the manufacturers’ appeal to the MIS level will favorably affect how corporations view the investment costs of moving to super servers.

In the past, Schatt said, LAN managers have made the server-purchase decisions. LAN managers are likely to be put off by the higher price of super servers as compared with traditional PC servers. But this will not be the case with MIS managers, he said.

“When you have an MIS director who has been buying mainframes, he looks at the [price of the super servers], and it’s a bargain” compared with mainframes, Schatt said.

So, although base prices for the servers in general have come down in recent months, price won’t make or break the super-server market.

“Over the long run, prices are not going to be a major factor,” Schatt said. “As the economy starts to come back, you’re going to see the manufacturers turn more profitable.”

posted by admin in Uncategorized and have No Comments

Server Construction That Replaced Mainframes Still Effective

Corporations are moving to multiprocessor servers from two directions. Some are stepping down from outdated and expensive mainframe environments, while others are stepping up from the slightly souped-up PCs they’ve been using as file servers.

Either way, buyers seek out power, scalability and fault tolerance, though not necessarily in that order.

“Super” servers flex their multiple processors in one of two architectures. Asymmetric multiprocessing dedicates each processor to a specific task. In symmetric multiprocessing, a more advanced and costlier technology, tasks are distributed to whichever processor is available.
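The distinction can be sketched in modern terms, far removed from the hardware of the era: in a symmetric scheme, each task goes to whichever worker happens to be free, rather than to a worker dedicated to that kind of task. A minimal Python illustration, with threads standing in for processors:

```python
from concurrent.futures import ThreadPoolExecutor

def task(n: int) -> int:
    return n * n

# Four interchangeable "processors": work is dispatched to whichever
# thread is available, the essence of the symmetric scheme. An asymmetric
# scheme would instead pin each kind of task to its own dedicated worker.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(task, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```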

But some network operating systems, most notably Novell Inc.’s NetWare, do not support more than one processor. So compatibility with network operating systems is as important to super-server buyers as multiprocessing power is.

As part of its conversion from an IBM 4381 mainframe to a networked-PC environment, The Dr. Pepper Bottling Co. of Texas has purchased the first of what promises to be several Advanced Logic Research Inc. (ALR) SMP PowerPros, according to Chris Rodriguez, director of management information systems.

The Irving, Texas, soda-manufacturing company bought the initial PowerPro 486/50 server to run a Unix application that tracks time and money spent on the upkeep of vending machines.

Additional servers will be integrated in a combined NetWare and The Santa Cruz Operation Inc. SCO Unix network on Ethernet as downsizing from the mainframe continues, Rodriguez said. SCO Unix supports symmetric multiprocessing.

The ability to do symmetric multiprocessing was “a definite issue. That’s why we chose SCO,” he said.

“Right now [the PowerPro] has just one processor, [but] as we bring more applications across we’re going to add a second processor,” Rodriguez said. “SCO Unix was a strong consideration because it allows you to take advantage of both processors.” Like super servers from a number of other manufacturers, ALR’s multiprocessor machines come with one CPU and slots to accommodate additional processor boards.

Multiprocessing became important for Byer California after the manufacturer of women’s garments moved from a proprietary Prime Computer Inc. shop to the client/server environment of Oracle Corp.’s Oracle relational database-management system.

“We found Oracle ran moderately well on a uniprocessor but was built to run better in a multiprocessing environment,” said Michael Higgins, technical support manager for the San Francisco firm.

The firm chose two Symmetry S2000/750 servers from Sequent Computer Systems Inc. as the servers for Oracle. Byer California also relies on several other Sequent servers, like an S2000/250 that acts as a Network File System server for the company’s native TCP/IP network.

For other buyers, the need to store vast quantities of data led to the purchase of super servers.

At SaTo Travel, a nationwide travel agency based in Arlington, Va., the “super” part of its Tricord Systems Inc. PowerFrame server is its ability to plug in 10G bytes of disk storage, according to Keith Venzke, manager of SaTo settlement plan administration.

The firm needs to store two years of detailed ticket-sales information; its old Network Connection Inc. Triumph TNX server could hold only a year’s worth of stripped-down data, Venzke said.

On the PowerFrame, he said, “I’m storing in excess of 32 million records on-line and I’m doing that without a mainframe or a minicomputer. I’m storing every ticket sold, every ticket refunded, every ticket reissued for the last two years. I have $2 billion in sales at a detailed level on-line.”

SaTo users access the information, stored in several Clipper databases, to answer customer and airline billing questions.

Keyport Life Insurance Co. of Boston also looked to super servers to meet its storage needs and to serve its large group of users.

Leslie Laputz, vice president of information services, said the company initially moved its policy administration system from an IBM 3090 mainframe, whose time it was renting from a vendor, to a 386-based file server from Acer America Corp. The 386 effectively handled the initial conversion of 6,500 policies, but wouldn’t be able to keep up with the company’s larger strategy, Laputz said.

“Our future plans were to convert 120,000 policies. We realized that [the 386] was not going to cut it,” said Laputz. He explained that those plans included adding more than 100 users to the roughly 10 then on the 386 server. Testing proved the 386 unsuitable.

“The 386 slowed to its knees when we tried to put multiple stress tests on it. We looked for [a server] with more throughput and settled on NetFrame [Systems Inc.'s NF 300].”

The NF 300 has since been upgraded to a more powerful NF 450FT.

Laputz said one NetWare network connects to each of the server’s eight I/O processor boards (IOPs) — NetFrame’s proprietary application processor boards. Each of the networks therefore can read the same files and the same disk, he said.

“[The IOPs] allow us to construct eight networks into the same box,” Laputz said. “It spreads the loads such that one network does not become saturated.”

Keyport Life has recently added an NF 400 server as part of a testing system for new applications, and has plans for more super servers, Laputz said.

“We have another 10 file servers, 386s,” he said. “We have ideas of consolidating a couple of those into a NetFrame.”

The scalability of multiprocessor servers is another major selling point. The ability to add processors and huge amounts of RAM and disk storage as databases grow and LANs expand constitutes a fundamental aspect of what makes the servers “super.”

In addition to upgrading his NF 300 to an NF 450FT, for example, Laputz said he has dramatically increased the memory and disk space of his server.

“At the beginning we had 64M bytes of RAM. [We increased] that to 128M bytes, and now we have plans to put 64M bytes on top of that,” he said. “We [initially] had 2G to 3G bytes of storage, and now we have 20G.”

In choosing the Sequent servers, Higgins at Byer California said he wanted a “very scalable, cost-effective open system.

“With the Prime environment, every time a new machine came out, it usually added 25 percent more power and it cost a good deal of money,” Higgins said. “We’d roll out the old, roll in the new and we’d pay for it.

“With Sequent I can double, then double again, and double again its resource capability. I can keep adding more memory boards, more CPU boards, more disk controllers to [achieve] a level of unsurpassed performance,” Higgins said.

Approaches to scaling up for the future differ from user to user. Venzke at SaTo Travel said he’ll continue to upgrade his one Tricord server to handle his growing file load, rather than spread it over several servers. He expects that this will simplify maintenance.

But Dr. Pepper’s Rodriguez has the opposite view; he’s planning to support multiple servers. “The reason we’ve chosen to go with multiple servers instead of one huge box with eight or 10 processors is primarily to keep away from a single point of failure,” he said.

The fault tolerance many multiprocessor servers offer can ease some of that anxiety. Features like Redundant Arrays of Inexpensive Disks protect against data loss by mirroring or duplicating data. Some multiprocessor servers also offer drives that, if a problem occurs, can be removed and replaced without bringing the system down.
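The protection mechanism behind such arrays can be sketched in a few lines. This is a generic illustration of striping with XOR parity, not any vendor's controller logic: the parity block lets the contents of any one lost drive be rebuilt from the survivors.

```python
from functools import reduce

def parity(blocks):
    """XOR corresponding bytes of the blocks together to form a parity block."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

drives = [b"ACCT", b"PAYR", b"INVT"]   # data striped across three drives
p = parity(drives)                     # kept on a separate parity drive

# If drive 1 dies, its stripe is rebuilt from the survivors plus parity:
rebuilt = parity([drives[0], drives[2], p])
assert rebuilt == drives[1]
```

Mirroring, by contrast, simply keeps a full duplicate of every block; parity trades some rebuild work for far less redundant storage.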

One buyer even said he ranks fault tolerance ahead of system performance as a buying criterion.

“The fact of the matter is, yes, they’ve got I/O performance enhancers, but the reason we bought them is not so much performance but fault tolerance,” said Mitchell Green about the Compaq Computer Corp. Systempros that the Cambridge Savings Bank of Cambridge, Mass., has purchased over the past two years.

“I know there are faster servers out there, but we bought [the Systempros] mainly because of the [Intelligent Disk Array] controls,” added Green, the bank’s assistant vice president of information systems. The Systempros’ IDA controllers protect data by mirroring it or striping it over several disk drives.

The bank has four Systempros: one model 386-420, two 386-840s and one 486-840. All four are on a WAN that connects three of the bank’s locations. The three 386 servers, which run on NetWare 3.11, handle office automation and proprietary banking applications. The 486 server runs an item-processing system on SCO Unix.

Fault tolerance was crucial because the bank relies on a small information-systems support staff, Green said.


When Developing Software, Take It Slow, Man!

In the 1980s, a decade in which it was impossible to tell the actors from the politicians, it was fitting that the Macintosh gave anyone with a little cash the ability to produce absolutely gorgeous documents with absolutely no content.

Now rapid software-development tools bring the same potential to developers. If you are quite reasonably looking to such tools to reduce your software backlogs, be careful: It’s as easy to abuse them as it is to use them well.

We’ve seen such abuses up close, and they’re scary. Here are a few of the symptoms. If you spot them in your developers, take action quickly.

All interface, no content. You may have seen this one before. The tool lends itself to creating graphical interfaces, so that’s what the developer does first. The interface looks great, but none of the underlying code is present.

This symptom is bearable if the developer knows the interface is just the start. It can even be good — if the developer plans to try out the interface on a few users before filling in the meat of the application.

The danger comes when the developer thinks the interface is the bulk of the job and the application logic is something to “touch up” or “plug in at the end.” When you hear those phrases, watch out: Your development cycle just got longer.

Baroque interfaces. Not every application’s interface has to push the edge of the human engineering envelope. If, for example, your application needs screens in which users must pick one choice from column A and one from column B, why do anything more complicated than that?

Failure to solve the problem at hand. Applications exist to help users automate business tasks or in other ways gain competitive advantages. As long as developers keep those goals in mind, you’re fine. Some tools, however, tend to seduce developers away from those goals and toward other ways of working.

Database front-end development tools, for example, are generally good at table creation and management, but not as good at incorporating programmatic logic. Developers caught up in the spirit of those tools can spend all their time designing complex data relationships that are not relevant to the task at hand. We’re all for future planning, but you can also plan so long that you never get any work done.

Failure to remember the problem at hand. This symptom is the next stage of the previous one. Sometimes developers get so caught up in their tools they completely forget what users want.

Let’s say you want to build a simple application to compare the profit-and-loss sheets of different divisions. Assume the data is in a database. Simple, right? Let users pick the divisions to compare, and then display the profit-and-loss data for those divisions side by side. Hard to mess up.

Unless, of course, your developers realize you have lots of divisions — too many to fit on a single screen. So they work for weeks to find clever graphical ways to show an unknown number of divisions at once. (They could pick a number and ask users if it’s OK, or scroll among many, but that would be cheating.) Meanwhile, the original problem remains unsolved.

None of these symptoms are inevitable. All applications, whether you’re building them with assembly language or Oracle Card, should be subject to periodic review. All developers should present their plans to user representatives for approval.

In short, most of the rules for smart development that have always made sense still make sense; you just have to deal with a faster pace now.

If you’re not being smart and careful about your application development, however, keep an eye out for these symptoms and nip the rapid-development disease before it spreads.


Computers Aren’t The Math Wizards We Think They Are

The myth of the computer’s math prowess runs so deep that it’s even built into the name. The verb “compute” comes originally from a Latin root that means “to think,” but during the last 400 years the English word “computation” has become almost synonymous with doing arithmetic.

The less we know about computers, the more likely we are to think of them as giant math machines — a belief that leads to excessive trust in computers’ mathematical abilities, despite their potential for making fundamental errors. As with almost every kind of computer problem, these errors are a result of the decisions made by programmers seeking to find the best combination of low cost, high speed and accuracy of results.

Such trade-offs are impossible to avoid, but it’s important for both the application developer and the user to be aware that such decisions are being made and to be confident that the right mix is being applied to the problem at hand.

There is a growing hazard that as on-board math coprocessors become a standard feature of mainstream chips such as the Intel 486 and Motorola 68040, software-development tools will treat the decisions of coprocessor designers as a default approach to handling math for all applications.

Although coprocessors reflect carefully considered standards and are designed to yield accurate results with reasonable handling of special cases, no single approach should be unthinkingly assumed to be best for all applications.

Why can’t computers pump numbers around their innards as easily as they transport streams of characters during word processing? To begin with, people usually work with powers of 10 (1, 10, 100 and so on) whereas computers work with a “binary” (two-value) system based on powers of two (1, 2, 4, 8 and so on).

This difference isn’t a problem when counting up whole numbers of things: for example, the number 10 in base 10 (one 10 plus zero 1s) is a longer but numerically identical 1010 in base 2 (one 8 plus zero 4s plus one 2 plus zero 1s).

The difference in bases makes life much more difficult, however, when trying to represent fractions: for example, a simple and common decimal value such as one-tenth (0.1) cannot be precisely represented by any finite number of digits to the right of a base-two binary point.

The value 0.1 can be approximated by a binary fraction such as 0.000110011 (1/16 plus 1/32 plus 1/256 plus 1/512), but this still leaves an error margin of about 0.4 percent.

Additional digits can be used to make the error as small as desired, but the binary system can never make the error disappear completely.
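The arithmetic can be checked exactly. A modern Python sketch, anachronistic to the article but faithful to it, using exact rational arithmetic:

```python
from fractions import Fraction

# The machine's stored 64-bit value for 0.1 is a nearby binary
# fraction, not one-tenth itself:
stored = Fraction(0.1)
assert stored != Fraction(1, 10)

# The nine-bit approximation from the text: 1/16 + 1/32 + 1/256 + 1/512
approx = Fraction(1, 16) + Fraction(1, 32) + Fraction(1, 256) + Fraction(1, 512)
error = abs(Fraction(1, 10) - approx) / Fraction(1, 10)
print(float(approx), float(error))  # 0.099609375 0.00390625
```

Adding more binary digits shrinks the error further, but because the binary expansion of 0.1 repeats forever, no finite number of digits eliminates it.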

This binary system wasn’t adopted just to be difficult. In 1947, mathematician Norbert Wiener showed that for any known method of storing complex structures of values, a system based on storing groups of only two possible low-level values would have the lowest cost per unit of information stored.

This led to the modern architecture based on BInary digITS (bits) of data, arranged for convenience in eight-bit bytes, and in words whose size depends on what’s convenient for a particular machine — typically 16 bits on the 80286 and earlier mainstream processors, and 32 bits on most modern designs.

But 32 bits are not enough for counting up the things around us. With 32 positions that can each hold two values, we can count over a range from zero to one less than the 32nd power of 2: that is, over a range from zero to 4,294,967,295, or only 4 billion and change.

Therefore, it’s common for a programming language to provide an integer data type with a size of 64 bits, able to handle values with a high end of more than 18.4 quintillion (or 18.4 billion billion).

With almost 20 quintillion electronic fingers to count on, a great many things can be done without the complexity of working with binary fractions. For example, large dollar amounts can be multiplied by 1,000 to give a figure in units of 0.1 cent, a precision usually considered adequate for most financial transactions.
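The ranges, and the 0.1-cent trick, are easy to demonstrate. A modern sketch (the specific price is invented for illustration):

```python
# Unsigned 32- and 64-bit counting ranges:
assert 2**32 - 1 == 4_294_967_295
assert 2**64 - 1 == 18_446_744_073_709_551_615   # about 18.4 quintillion

# Money kept in units of 0.1 cent, as the text suggests: all arithmetic
# stays in exact integers, so no binary-fraction error can creep in.
price_mils = 19_995              # $19.995, times 1,000
total_mils = 3 * price_mils
dollars, mils = divmod(total_mils, 1000)
print(f"${dollars}.{mils:03d}")  # $59.985
```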

In the real world, a 64-bit integer can represent the average distance to Pluto in units of 0.0001 inch.

In situations where a value is known to have a certain range, a programmer can spread the available precision across that range to get the most accurate results possible.

For example, a temperature sensor in Minnesota might need to measure values ranging from minus 80 to plus 100 degrees Fahrenheit: a programmer could take the actual value, add 80 and multiply by 20 million to produce a value that uses most of the available bits in a 32-bit word.

Before such a scaling operation, a “jitter” of one bit would represent an error of almost 1 percent; after scaling, the error from a one-bit jitter would be far too small for concern, much less than the probable margin for error in the temperature sensor itself.
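The sensor example works out as follows; a minimal sketch using the offsets and factors given in the text:

```python
def scale_temp(deg_f: float) -> int:
    """Map -80..+100 degrees F onto most of a 32-bit unsigned word."""
    return int((deg_f + 80) * 20_000_000)   # add the offset, then multiply

def unscale_temp(raw: int) -> float:
    return raw / 20_000_000 - 80

raw = scale_temp(72.5)      # 3,050,000,000 -- fits below 2**32 - 1
assert raw < 2**32

# A one-bit jitter in the scaled value is a vanishingly small error:
jitter_deg = unscale_temp(raw + 1) - unscale_temp(raw)   # 0.00000005 degrees
```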

These are three of the least-complex ways of representing real-world numbers in a computer. Simple integers are the electronic equivalent of counting on your fingers. Integers with a multiplication factor (also known as fixed-point numbers) are the equivalent of using fingers to represent fractional units, then working with those units as if they were whole numbers.

Integers with an offset and a multiplication factor (or scaled integers) involve more complex setup calculations than the other integer models, but also make the best possible use of the available precision.

But all of the integer approaches break down in cases where numbers take on a huge and unpredictable range of values. Scientific calculations might work with values as large as the number of atoms of gas in the sun (say “billion” six times, then take a thousand of those), or values as small as the weight in pounds of a single electron (say “one billionth” three times and divide by 1,000).

To handle such a range using integer techniques would require a word size of almost 290 bits, which would be impractical: On-chip registers must provide the space, transfers from memory to processor must move the data, and both the cost of hardware and the time to produce results would get worse.

The floating point

Such problems are typically solved with floating-point procedures, which can be executed in software or built into coprocessor hardware. Floating-point numbers are represented by two components: an exponent, which gives the approximate range of the value in terms of some power of a base value, and a mantissa (sometimes called the significand) that multiplies the exponent to give the more precise overall value.

For example, the decimal value 1,234 has a mantissa of 1.234 and an exponent of 3: The value of the mantissa is multiplied by the exponent’s power of the base (in this case, by the third power of 10).

For any given number of bits to be used overall, a decision must be made as to how many should be allocated to the exponent and how many to the mantissa. With more bits of exponent, we can handle a larger range of values, but with less precision. With more bits of mantissa, we can be more precise, but over a smaller range.
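The decomposition can be checked mechanically. A modern Python sketch of both splits of the 1,234 example (`math.frexp` keeps its mantissa in [0.5, 1) rather than [1, 2), but the idea is identical):

```python
import math

# Base-10 split of 1,234: mantissa 1.234, exponent 3
exp10 = math.floor(math.log10(1234))   # 3
mant10 = 1234 / 10**exp10              # 1.234
assert exp10 == 3

# The machine's own base-2 split of the same value:
mant2, exp2 = math.frexp(1234.0)
assert (mant2, exp2) == (0.6025390625, 11)   # 1234 == 0.6025390625 * 2**11
assert mant2 * 2**exp2 == 1234.0
```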

The prevailing standard for floating-point computation is ANSI/IEEE 754-1985, where “1985” denotes the (surprisingly recent) year of adoption. This standard is the basis of both the Intel math coprocessors and the Standard Apple Numerics Environment (Motorola coprocessors augmented by proprietary software) on Macintosh systems.

This standard defines a “long real” number format using 64 bits: one bit for the sign of the number (plus or minus), 11 bits of exponent (representing the range from -307 to +308) and 52 bits of significand.

Assuming that the significand will always be at least 1 but less than 2 (remember, this is base 2), the first bit of the significand can be assumed to have a value of 1 and it need not be actually stored. This format can represent a range of values from 4.19 * 10^-307 up to 1.67 * 10^308.
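The three fields can be pulled apart directly. A modern sketch that unpacks the 64 bits of a “long real” (Python’s float is exactly this format):

```python
import struct

def double_bits(x: float):
    """Split an IEEE 754 64-bit 'long real' into its three fields."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63                     # 1 sign bit
    exponent = (bits >> 52) & 0x7FF       # 11 exponent bits, biased by 1023
    significand = bits & ((1 << 52) - 1)  # 52 stored bits; the leading 1 is implied
    return sign, exponent, significand

sign, exp, sig = double_bits(-0.1)
print(sign, exp - 1023)   # 1 -4   (0.1 is 1.6 times 2 to the -4)
```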

Beyond the 64 bits that a value is supposed to use in the outside world, IEEE-standard floating-point hardware uses an additional 16 bits (making 80 bits in all) for “temporary real” numbers.

This additional precision, if used for intermediate calculations, can greatly reduce the vulnerability of your results to cumulative errors such as those that occur when numbers are rounded up or down.

Retaining this precision, however, requires one of two things. Eighty-bit expressions must be moved between processor and memory, which is time-consuming, or a complex calculation must be managed to keep those values in on-chip registers in a way that minimizes those transfers. This task demands extremely sophisticated global knowledge of algorithms, beyond the capability of most compilers.
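The payoff of extra intermediate precision shows up even in trivial sums. A modern sketch: `math.fsum` tracks the lost low-order bits in software, much as 80-bit temporaries preserve them in hardware:

```python
import math

# Ten additions of 0.1 in plain double precision already miss the target:
naive = 0.0
for _ in range(10):
    naive += 0.1
assert naive != 1.0          # cumulative rounding error

# Carrying extra precision through the intermediate sums recovers
# the correctly rounded result:
assert math.fsum([0.1] * 10) == 1.0
```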

IEEE 754 is not universally loved. It has been criticized as giving too much space for range and not enough for precision. Tom Ochs, president of Structured Scientific Software in Albany, Ore., has calculated that the estimated volume of the universe — measured in units of quantum space, believed to be the smallest fundamental unit of volume — is only about 6.8 * 10^177, smaller than the maximum IEEE value by a factor of 10^130 (say “billion” 14 times and multiply by 10,000).

At the same time, there is a gap between adjacent IEEE values — for example, between -4.19 * 10^-307 and 4.19 * 10^-307. That gap may seem small, but there is only one IEEE number available for every block of 3.7 * 10^158 pockets of space. By this argument, based on things that we might actually want to count, the IEEE standard makes a bad trade-off between range and precision.

Avoiding errors

We are always free to solve math problems on a computer in precisely the way that we would solve them by hand: treating numbers not as patterns of bits but as strings of decimal digits, and doing the equivalent of long division or multiplication in the same way that we would do it with pencil and paper. This eliminates any problems of arbitrary limits on either range or precision.

Binary-coded decimal (BCD), in which each digit of a number is represented separately rather than converting the entire number to binary (with attendant errors), is a traditional approach to business computations that do not allow any room for errors.

The REXX procedures language (included as a utility in OS/2) and many dialects of the LISP programming language also support infinite precision mathematics, as do symbolic tools such as Soft Warehouse Inc.’s Derive, Waterloo Maple Software’s Maple and Wolfram Research Inc.’s Mathematica.
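Python’s `decimal` module, a modern stand-in for the decimal-digit approach the text describes (it is not one of the tools named above), shows the idea:

```python
from decimal import Decimal, getcontext

# Digit-by-digit decimal arithmetic, like pencil and paper: 0.1 is exact.
assert Decimal("0.1") + Decimal("0.1") + Decimal("0.1") == Decimal("0.3")
assert 0.1 + 0.1 + 0.1 != 0.3   # the binary floats disagree

# Precision is a policy choice, not a hardware limit:
getcontext().prec = 50
print(Decimal(1) / Decimal(7))  # 0.142857142857... to 50 significant digits
```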

These techniques reduce raw computational speed, but they also increase productivity in getting the program written and confidence in the correctness of results.


Borland Destroyed Itself, Frankly

Well, yes, $99 prices do get our attention, don’t they?

But an introductory price under a hundred bucks certainly isn’t going to be the key to whether the Redmond gang succeeds in its effort to muscle into the serious database market — one of only two applications-software areas where it has never been able to compete.

(The other area? Async communications, where, seven years ago, Microsoft briefly sold one of the worst programs ever shipped. It was named — eerily — Microsoft Access. I think I might have been a little more sensitive to history, Mr. Gates.)

Nope, $99 prices won’t do it. All Gates & Co. are doing with that teaser price is getting our attention and asking us to take a look — in effect, asking us to pick up the production and distribution costs, plus a pence or two for the shareholders, for our evaluation copies.

Scared of commitment

Committing to a database program is among the most difficult and worrisome decisions information systems professionals have to make. Given the huge investment in databases, in-house development expertise and tools, and end-user training, walking away from one database and choosing another is a decision few of us look forward to.

Indeed, that inertia is the single largest factor shaping the corporate market for database packages for PCs and networks.

Borland, of course, has been a direct beneficiary of that inertia. By acquiring first Paradox and later dBASE, Borland bought market share by the barrelful. Many thought Borland overpaid for dBASE, but by assembling the dominant market share between its two high-end products, Borland was able to stake claim to almost half the installed base of high-end PC database products.

Borland bought itself some security. Market share, with its attendant upgrade income, and the time to develop follow-on products are often worth buying.

By contrast, Microsoft’s slice of the market, acquired via last summer’s purchase of Fox, is less than a third the size of Borland’s.

By the usual rules of the PC applications-software business, Borland ought to be well-positioned to stay on top. But the well-publicized delays this year in getting Windows versions of Paradox and dBASE to market (as well as critical delays in other new Borland products) have undermined that market-dominance security.

I say this with some difficulty. I told audiences in speeches in late 1991 that I thought 1992 was going to be The Year of Borland. With rich, far-reaching products such as Paradox 4.0, Paradox for Windows, Quattro Pro 4.0, Quattro Pro for Windows and dBASE for Windows coming this year, Borland was about to enter the most remarkable new-product-release cycle we’d ever seen in the PC software business.

The stock was going to soar.

In fact, of course, too many of those products slipped — and badly. And the stock got slaughtered.

If, as I suspect, Borland is about to lose serious database market share to Microsoft, coming in from left field, and even more market share to the Oracles of the world, coming down from the big-iron universe, it will have squandered an extraordinary franchise.

Paradox for Windows is still one of the most dazzling products I’ve seen. And it’s hard to knock a guy who says, as Borland’s Philippe Kahn is wont to, “eet weel sheep when eet is ready, and not before.”

But Borland’s customers have been on the hook for a very long time. During that time, notwithstanding our reluctance to make such a painful change as switching from one database product to another, the Windows imperative has created a real urgency in many shops to find a Windows front end to existing databases.

Enter Microsoft Access.

I’ve only worked with Access a little, so I can’t claim expertise with it yet. But everything I see, as I peel back the layers of the onion, I like.

I am not especially cheered by the prospect of Microsoft dominating yet another area of Windows apps as thoroughly as it does word processing and spreadsheets. But when you make scary decisions, it helps to go with strength. And Access looks very, very strong.

RAID’s Baby Stages Built A Great Future For Data Storage

Feeding on the proliferation of PC LANs, user interest in RAID — Redundant Arrays of Inexpensive Disks — is building rapidly.

“As the network applications become more critical to the company, you’ve got to take significant steps to make sure that when the network goes down it doesn’t take everything with it,” said Roy Wilsker, manager of end-user servers for Kendall Co., a health-care and adhesives company based in Mansfield, Mass.

Wilsker said one of those steps he is considering is installing RAID products in his network servers.

“It’s believed to be potentially a $5 billion market and there’s not a clear market leader, so everyone’s rushing in,” said Seth Traub, storage-market analyst for International Data Corp., a market-research company based in Framingham, Mass.

Recently unveiled products include Micropolis Corp.’s Raidion disk-array subsystems, AST Research Inc.’s array controllers for its server line and IBM’s AS/400 RAID array.

In order to sift through the profusion of recently released RAID products, users like Wilsker need to clearly understand the technology — which lets several disk drives work together to boost reliability and performance — observers said.

“Anybody who is buying anything that’s complex and doesn’t understand it is looking for trouble,” said Joe Molina, chairman of the RAID Advisory Board, which was created four months ago to help clear up some of the confusion.

The Advisory Board was in part the brainchild of Molina, who spent the last decade promoting the Small Computer System Interface (SCSI).

Tired of facing customers with unfamiliar technology, Molina said he left a SCSI marketing job 10 years ago to start Technology Forums. The Lino Lakes, Minn., firm is educating both vendors and users on data storage-related topics.

RAID is currently in a position similar to that of SCSI 10 years ago, according to Molina, and Technology Forums serves as a facilitator for vendors who want to elevate RAID beyond buzzword status.

So far, 24 companies, including IBM, Digital Equipment Corp., NCR Corp. and Seagate Technologies Inc., have signed on as board members.

Closer to being an advocacy group than a standards-setting body, the RAID Advisory Board is trying to sort through the technical fine points that separate RAID products and develop guidelines to make the products more uniform.

For example, the group wants to encourage all disk drive makers to make their drives’ spindle-synchronization mechanism work the same way, Molina said. If they did, RAID developers wouldn’t have to accommodate different spindle-synchronization signals, a bit of re-engineering that can add to a RAID product’s price.

Until standards are set, RAID can mean different things depending on a particular vendor’s point of view. Most vendors look to an academic paper written by professors at the University of California at Berkeley in 1987 to develop their form of RAID.

In that paper, titled “A Case for Redundant Arrays of Inexpensive Disks,” the technology was grouped into several categories (see chart, Page 81). Although RAID categories are called levels, they are not hierarchical.

Simply put, a drive array ties disk drives together so they can share the task of storing data. Should one of the drives fail, other drives in the array are there to keep the data intact. The RAID products spread the data around differently, depending on what type — or level — of technology is employed.

Generally, RAID employs striping, which distributes data evenly across the disks, and mirroring, which makes duplicate copies of data on separate disks.
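The two layouts just described can be sketched in a few lines of Python. This is a hypothetical toy model for illustration only; real arrays operate on disk blocks through a controller, not on bytes in memory:

```python
# Toy model of the two basic RAID layouts: striping and mirroring.

def stripe(data: bytes, ndisks: int) -> list[bytes]:
    """RAID 0-style striping: distribute data round-robin across disks."""
    disks = [bytearray() for _ in range(ndisks)]
    for i, byte in enumerate(data):
        disks[i % ndisks].append(byte)
    return [bytes(d) for d in disks]

def mirror(data: bytes, ndisks: int = 2) -> list[bytes]:
    """RAID 1-style mirroring: an identical copy on every disk."""
    return [bytes(data)] * ndisks

print(stripe(b"ABCDEF", 3))   # [b'AD', b'BE', b'CF']
print(mirror(b"ABCDEF"))      # [b'ABCDEF', b'ABCDEF']
```

Striping spreads the load but provides no redundancy on its own; mirroring doubles the storage cost but survives the loss of either copy.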

Each type of RAID has its own advantages and disadvantages. RAID 5, for example, can cause drives to perform slower than RAID levels 0 or 1 because it takes extra time to compute and write error-correction data. However, RAID 5 affords the high level of data protection that many users require for their network servers.
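The "error-correction data" that slows RAID 5 writes is parity, computed as the XOR of the data chunks in each stripe: lose any one chunk and the survivors plus the parity rebuild it. A minimal sketch of the idea (a toy model, not any vendor's controller logic):

```python
# RAID 5 parity: XOR the data chunks in a stripe; the result, stored
# on another drive, lets any single lost chunk be reconstructed.
from functools import reduce

def parity(chunks: list[int]) -> int:
    return reduce(lambda a, b: a ^ b, chunks)

stripe = [0b1010, 0b0110, 0b1100]   # data chunks on three drives
p = parity(stripe)                  # parity stored on a fourth drive

# Drive 1 fails: XOR the surviving chunks with the parity to rebuild it.
recovered = parity([stripe[0], stripe[2], p])
print(recovered == stripe[1])       # True
```

Computing and writing `p` on every update is the extra work that makes RAID 5 writes slower than levels 0 or 1, which is exactly the trade-off the paragraph above describes.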

In some RAID configurations, the drives store data faster together than a single drive alone. So a grouping of less-expensive slower drives can offer greater throughput than a faster, more expensive drive. For example, in some mirrored arrays, the controller reads alternate clusters of files from each drive simultaneously, then pieces the information together and delivers it to the PC. Thus, reading time is cut significantly when two drives are linked through mirroring.

However, some vendors implement those RAID levels with slight differences; some support a given level in hardware, and others support a level in software.

Still others have developed their own type of RAID. For example, Storage Computer Corp., of Nashua, N.H., is now selling a patented hardware design it calls RAID 7 (see story, below). The subsystem is the first RAID architecture to implement a truly standards-based data storage system, according to company officials.

Storage Computer Corp.’s president says his company has created a superior RAID product by defying conventional wisdom.

Ted Goodlander isn’t shy about saying that the Nashua, N.H., firm’s RAID 7 storage subsystem doesn’t fit into the six Redundant Arrays of Inexpensive Disks categories followed by most disk-array vendors.

Indeed, Goodlander claimed that Storage Computer (which is known as StorComp) was working on the basic technology for the product long before the publication of the so-called Berkeley papers, an academic work on disk arrays written by three University of California computer-science researchers that is often cited as the foundation of RAID products (see chart, Page 81).

“So many people took that paper and said it was the Holy Grail,” Goodlander said.

Unlike other varieties of RAID, in which the disk drives rotate in sync, StorComp’s RAID 7 subsystem has an asynchronous design, he said. RAID 7 moves the drive heads independently of each other to increase the number of reads and writes that the array controller can handle, Goodlander said.

StorComp’s RAID 7 also utilizes special algorithms that help prevent the controller’s data cache from becoming saturated. As a result, the company claims its RAID 7 subsystem transfers data two to four times faster than other RAID subsystems and still provides fault-tolerance for as much as 141G bytes of data.

The RAID 7 desktop units, available now, start at $15,900 for a host interface, power supply and software license. Hard drives range from $400 to $4,000. A base system can be expanded to support 12 disks and two host interfaces.

Without standard benchmarks, it is difficult to know how the StorComp subsystem stacks up against other RAID products, said Seth Traub, storage-market analyst for International Data Corp., a market-research firm in Framingham, Mass. Industry standard benchmarks are still being formalized.

The Birth Of Acrobat And Adobe’s Screw-ups

For a company that typically shies away from preannouncing products, Adobe Systems Inc.’s recent formal unveiling of its Acrobat document-interchange technology seems a little out of character.

While almost everyone agrees that Adobe is working on an important technology, the hoopla over the announcement turned out to be little more than a public relations effort.

Yes, Adobe unveiled the formal name of the technology — Acrobat — performed a “live” demonstration and announced two components, Acrobat Viewer and Distiller, which will be delivered within six months. (See PC Week, Nov. 23, Page 6.) But the Mountain View, Calif., company has previewed the technology, code-named Carousel, publicly for the past year, so most of this was not news.

As one Adobe official put it, “This was meant to bring the uninitiated up to date.”

So why the big splash for a technology that has been public knowledge for quite some time? The answer may well be that Adobe is feeling the heat from Microsoft Corp. and from cloners of its PostScript page-description language (PDL), and this was its attempt to divert the focus.

“We are at a crossroads,” said one Adobe official, who did not want to be identified. “We are definitely being hurt by Microsoft on the font front and the clone vendors on the PostScript [front].”

Adobe has dominated the font business with its Type 1 product for some time, but Microsoft, as it has done in many other arenas, has jumped in, gunning for a piece of the business.

When Microsoft rolled out Windows 3.1 earlier this year, the company included its own TrueType font technology in the operating environment, firing a direct salvo at Adobe.

Microsoft’s TrueType push is off to a good start — largely because the company is giving away 13 free typefaces with its Windows 3.1 operating system.

In other words, Microsoft is telling the average user: Why pay for fonts when you can get them for nothing? Adobe sells its Adobe Type Manager font package for $99.

Microsoft is betting that users will find it hard to wean themselves off TrueType fonts once they get used to them. When they are ready to become serious font users, more than likely, they will turn to the Redmond, Wash., software giant’s retail font packages, priced at $69.95.

“The writing is on the wall for [Adobe's] Type 1,” said Rob Auster, vice president of electronic printing at market-research firm BIS Strategic Decisions in Norwell, Mass. “Microsoft is putting nearly a million copies of Windows in the market, which means TrueType is going to be the de facto font standard.

“Adding insult to injury is the fact that Hewlett-Packard [Co.] is bundling TrueType with its [LaserJet 4] printer,” Auster added.

He cautioned, however, that Type 1 is not going to go away overnight.

Clones hurt PostScript business

Adobe is also not doing too well on the PostScript licensing front, where it has made most of its money. It has been registering only modest gains in revenue mainly because of the proliferation of PostScript clones.

“The clones are very reliable. … There’s no longer a stigma attached to them,” said Auster. “The PDL market is absolutely price-driven, and vendors will pick and choose [based on the price].”

Adobe has to drop its royalty fee if it wants to compete in the marketplace, Auster added.

Adobe, of course, is not sitting idly by — it has signed several licensing deals with IBM, Lotus Development Corp. and Compaq Computer Corp. for its font technology.

Also, Adobe has typically serviced the serious, high-end users, who will probably stay with the company. But it has to realize it can squeeze only so many dollars out of that niche.

So what does the future hold for Adobe? Most observers believe Adobe faces difficulties with its current core business, but they see a silver lining in the Acrobat technology.

For example, Piper Jaffray, an investment firm based in Minneapolis, boosted its stock recommendation based on the Acrobat announcement.

“[Acrobat] is the second making of Adobe. If they are successful, it will be a huge win for them,” said Jonathan Seybold, publisher of the Seybold Report on Desktop Publishing, a newsletter published in Malibu, Calif.

CDPD: It Died So Faster Cell Service Could Live

Nearly 100 years after Guglielmo Marconi successfully transmitted and received electronic signals via his wireless telegraphic invention, a budding wireless network is poised to become the mobile data transmission route of the future.

The Cellular Digital Packet Data (CDPD) network is designed to let cellular subscribers send digital data from mobile PCs over existing cellular networks. Mixing a myriad of techniques, including sending data in packets over cellular airwaves rather than as a traditional analog transmission, CDPD could provide PC users with quick and reliable data transmission, analysts said.

“CDPD is going to happen and it will be a tremendous challenge for RAM [Mobile Data Inc.] and Ardis,” said Paul Callahan, senior industry analyst for Forrester Research Inc., a market-research firm in Cambridge, Mass. “When it’s available, CDPD will basically be as cheap as [RAM Mobile's] Mobitex or Ardis’ on-line service charge, and the modems could be a lot smaller if not the same size” as traditional modems, he added.

CDPD will initially target industrial users with applications ranging from telemetry and point-of-sale to transportation. Its broader success, however, will depend on several factors: how quickly it can be rolled out, price, reliability of the network and the breadth of products designed for it, observers said.

Drumming up support

The technology has the backing of nine cellular carriers, which make up the bulk of U.S. cellular service providers, while PC makers such as IBM and Apple Computer Inc. are promising a host of CDPD applications for early next year.

The cellular carriers face the challenge of upgrading their national networks to handle data as well as voice — and handling it more efficiently than many voice calls are handled today. Many users of cellular services complain about lost signals when traveling away from cell sites and the annoyance of dealing with different phone systems.

“The problems with voice are the same problems with data — you check in and go through the whole thing and it seems to be an inconvenience,” said Richard Barg, an attorney in Atlanta, who uses cellular phones. “[With cellular] data, if it’s using a nationwide network using the same standard, you have something analogous to interstate commerce: no restrictions and barriers.”

Cellular carriers plan to add to their networks equipment such as mobile data gateways, enabling data packets to be routed to their proper destination. Base station receivers also will be required at each cell site for mobile radios sending and receiving data, said Rob Mechaley, vice president of technology development for McCaw Cellular Communications Inc. in Kirkland, Wash.

A key aspect of CDPD lies in a technique called “channel hopping,” which parcels out data over existing analog voice channels that are not being used for regular voice calls. Channel hopping differs from another cellular technique called cellular spectrum, which is used by Cellular Data Inc. (CDI) and dedicates the radio spectrum to broadcast signals to base stations.

As a result, CDPD data calls will be subject to less interruption, and data-transmission speeds will reach 19.2K bps on a 30KHz channel; competing networks such as CDI’s top out at 4,800 bps. Ardis earlier this year opened up its protocols to support transmission speeds as high as 19.2K bps.
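The raw rates above translate into a tangible difference for users. A back-of-the-envelope comparison (this ignores protocol overhead and channel-hopping gaps, which the article does not quantify):

```python
# Transfer time at the two line rates quoted above:
# CDPD at 19,200 bps versus CDI at 4,800 bps.
def seconds_to_send(nbytes: int, bps: int) -> float:
    return nbytes * 8 / bps

msg = 10_000  # a 10K-byte message
print(seconds_to_send(msg, 19_200))  # about 4.2 seconds
print(seconds_to_send(msg, 4_800))   # about 16.7 seconds
```

A fourfold rate advantage means the same message ties up the channel for a quarter of the time, all else being equal.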

Modem makers incorporating the CDPD specification will likely have to add new circuitry to their standard Hayes AT Command Set modems, while software developers will simply need to add new APIs to their messaging applications, said Craig McCaw, chairman and CEO of McCaw.

“The fundamental driver of CDPD is incredibly simple,” he said. “All the facilities and power are in place, and the spectrum has been allocated.”

As a result, costs should remain comparable to existing products; for example, standard CDPD-compatible PCMCIA Type II modems are expected to be priced around $360 to $380, he said.

Other costs, however, may be more prohibitive. Early users could face steep subscription charges — as much as $50 to $75 per month — as well as connect-time charges for each data transmission, analysts said.

The monthly charge for using RAM Mobile’s Mobitex packet-radio network is $25, plus 5 cents for each 100-byte file or 12.5 cents for a 512-byte file.
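Taking the published rates at face value, a subscriber's monthly bill is easy to model. The two-tier pricing function below is a simplified, illustrative reading of those rates, not RAM Mobile's actual billing logic:

```python
# Mobitex pricing as quoted above: $25/month base, plus 5 cents per
# 100-byte file or 12.5 cents per 512-byte file.
def monthly_cost(files_per_month: int, file_bytes: int) -> float:
    per_file = 0.05 if file_bytes <= 100 else 0.125
    return 25.0 + files_per_month * per_file

# 20 short status messages a day over ~22 working days:
print(round(monthly_cost(20 * 22, 100), 2))  # 47.0
```

Under this reading, a user would need to send roughly 600 of the larger 512-byte files in a month before crossing the $100 threshold that Barg balks at below.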

“I would not pay $100 per month for transmitting data,” said Barg, the Atlanta attorney. “The current cost of carrying a laptop around is $100 per month.”

While Ardis and CDI plan to link their networks to CDPD, RAM Mobile plans to compete head to head with the technology, said Don Grust, product manager for RAM Mobile in New York.

“Whereas CDPD is not even in release 1.0, we are on release 13 and 14 and have plans for 15 and 16 next year,” said Grust.

Mobitex, which is expected to cover 90 percent of the U.S. population by mid-1993, will provide a more reliable method of sending E-mail over the airwaves for several years to come, he added.

At least one large corporate PC site, however, is looking forward to CDPD. “To me it’s a very exciting technology,” said Sheldon Laube, national director of information and technology at Price Waterhouse, a New York-based accounting firm. “If you need to let staff in the field contact home, they can do so without finding a phone.

“No technology is perfect,” he added, “but we want to make [CDPD] work, and the carriers want to make it work.”
