Product Watch: Sony Unveils Magneto-Optical Drive. Pulse Has Exporter For Maya. Maxoptix Dazzles Crowd.

December 8, 2000



San Jose, CALIF. — Sony Electronics announced its fifth-generation 5.25-inch magneto-optical (MO) drive, nearly doubling the capacity of its popular 5.2GB MO drive to 9.1GB and increasing its data transfer rate by up to 20 percent. The new multifunctional drive offers 14 times the capacity of the first-generation 650MB MO drive and is backward read-compatible with all four previous generations of media. The internal Sony SMO-F561 MO drive offers high-speed access to 9.1GB of data, archival capability, reliability and portability, making it ideal for such data-intensive applications as document and medical imaging, telecommunications, multimedia, graphics, design and audio/video editing.

“The development of 14X MO technology demonstrates Sony’s consistent technical leadership, spanning more than a decade since the introduction of the first MO drive,” said Toshi Kawai, marketing manager of MO drives for Sony Electronics’ Component Company. “We are committed to providing our customers with the capacity levels they require, while maintaining backward read compatibility with previous generations of media.” The increased capacity was achieved by using magnetically induced super resolution (MSR) technology. MSR allows optical drives to read recorded marks on the disk that are smaller than the laser beam spot size, while reducing the potential for “cross-talk” from adjacent tracks. This technique allows for a significant increase in track density while maintaining data integrity. This generation of MO technology also incorporates Land and Groove Recording, a technique that permits data to be recorded both on the flat surface between the tracks (the land) and in the deeper spiral track (the groove). This recording approach allows for a narrower track pitch, resulting in more efficient use of the total recording surface of the disk.


San Francisco, CALIF. — Autonomy Corporation plc, a global leader in infrastructure software for the Web and the enterprise, announced the availability of a peer-to-peer implementation of its technology for the automated processing, managing and delivery of unstructured information. Autonomy’s Personal Distributed Query Handler (PDQH) implementation allows multiple networked PCs to work as one seamless system. Enabling team members to harness the combined processing and storage capacity of their personal computers with the information they own, Autonomy PDQH transparently unites resources with information to solve business critical issues from all locations. Autonomy PDQH technology automatically categorizes, tags, links and delivers documents stored on an individual’s hard drive and corporate file servers, providing a single point of access for all intellectual property in an organization. Actual processing is evenly distributed across all networked machines creating a highly scalable information exchange.

For over two years, Autonomy’s DQH technology has allowed large systems, composed of multiple servers working together and reporting to one controlling system, to appear as one virtual system. By distributing complex systems across multiple computers, Autonomy has powered some of the highest-load and largest-volume unstructured information systems in the world while keeping hardware and management costs exceedingly low. Autonomy is able to provide fast and highly scalable systems that process hundreds of gigabytes of content within tenths of milliseconds. Autonomy PDQH is a true peer-to-peer version of this technology and comes to market tested and proven.

To effectively leverage intellectual capital, people need instant access to both published and work-in-progress information. In both cases, access is hampered by the necessity to manually publish documents, spreadsheets, Web pages, presentations and the like into a central repository or portal. In addition, for work in progress, which is rarely published, someone first must find out who is working on a particular project and then must request the information. Ultimately this is inaccurate, inefficient, and unsustainable in a large organization. The only way to effectively incorporate intellectual capital is to teach computers how to understand it. Autonomy’s software provides that intelligence. It enables computers to form an understanding of text, Web pages, e-mails, voice, documents and people’s areas of expertise. Because of this ability, it automates the process of categorizing, managing, personalizing and delivering unstructured information. In a peer-to-peer environment, this enables access to both published and unpublished work without having to know who specifically is working on it.
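Autonomy's actual implementation is proprietary; purely as a sketch of the fan-out-and-merge pattern a peer-to-peer query handler implies, the Python fragment below distributes a query across peer indexes in parallel and merges the scored results. All class and function names here are hypothetical illustrations, not Autonomy's API.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical illustration of the peer-to-peer query pattern described
# above: each peer indexes its own documents, a query fans out to every
# peer in parallel, and the scored hits merge into one ranked list.

class Peer:
    def __init__(self, name, documents):
        self.name = name
        self.documents = documents  # {doc_id: text}

    def query(self, terms):
        """Score local documents by naive term-frequency overlap."""
        results = []
        for doc_id, text in self.documents.items():
            words = text.lower().split()
            score = sum(words.count(t.lower()) for t in terms)
            if score:
                results.append((score, self.name, doc_id))
        return results

def distributed_query(peers, terms):
    """Fan the query out to all peers concurrently, merge by score."""
    with ThreadPoolExecutor() as pool:
        partials = pool.map(lambda p: p.query(terms), peers)
    merged = [hit for part in partials for hit in part]
    return sorted(merged, key=lambda hit: hit[0], reverse=True)
```

In this toy setup, an employee querying for Asian manufacturing would surface a Hong Kong colleague's in-progress draft without knowing who wrote it, which is the behavior the article describes.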
For example, an employee researching market conditions in Asia for an upcoming product launch would automatically be provided with a report that a co-worker in Hong Kong is preparing on the manufacturing industry in China.


Reston, VA. — W. Quinn Associates, Inc., developers of the industry-standard StorageCeNTral quota management and disk reporting suite for Microsoft Windows NT/2000, announced the general availability of SpaceMaXX SRM, a family of storage resource management reporting tools for desktops, server-attached storage, storage area networks, and network-attached storage devices. SpaceMaXX SRM monitors disk space consumption at the partition level; enables usage thresholds to be set on specific drives; triggers web-based “best practice” reports at scheduled times, on request, or as disk utilization thresholds are reached; and takes any pre-determined action to regain wasted disk space. The product family includes SpaceMaXX SRM Server, which runs on Windows NT/2000 and 9x servers and NAS devices; and SpaceMaXX SRM Professional, which runs on single Windows NT/2000, 95, 98, or Millennium Edition (“ME”) desktop computers.

By running SpaceMaXX SRM’s automated “best practice” reports, IT professionals can immediately identify the reason for explosive storage growth and determine how best to control it. These reports can specifically highlight non-business Web downloads, such as MP3 music files; unused or stale files to be archived or deleted; duplicates; old desktop backups; or voluminous e-mail attachments. Excess files can cause a server outage, increase the backup window, halt productivity or require the addition of more disks to expensive RAID subsystems. SpaceMaXX SRM report formats include Microsoft Excel and interactive Active HTML for drilling down to report details. For example, SpaceMaXX SRM’s HTML-based reports enable systems administrators, as well as end users, to open files, move them, or delete them right from the report through Microsoft’s Internet Explorer browser. The reports can also be integrated into a help desk’s intranet site or routed via e-mail to systems administrators. Najaf Husain, WQuinn’s president and CTO, says, “Typically, IT professionals aren’t aware that many gigabytes of data on their servers are pure clutter. Without SpaceMaXX SRM, these professionals would have to spend weeks manually locating files and then deleting them. SpaceMaXX SRM can reduce the clean-up process to minutes and can keep servers, desktops and NAS devices from turning into dumpsters. SpaceMaXX SRM’s storage resource management reports can identify the need for other storage functions, such as archiving files to a secondary storage device.”
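WQuinn's report engine is of course proprietary; purely as an illustration of the kind of scan behind such "best practice" reports, here is a Python sketch that walks a directory tree and flags oversized files, stale files, and byte-identical duplicates. The thresholds and report shape are invented for the example, not SpaceMaXX's.

```python
import hashlib
import os
import time

# Illustrative scan, not SpaceMaXX's actual logic: walk a tree, flag
# files that are very large or untouched for a long time, and group
# byte-identical duplicates by content hash.

def scan(root, big_bytes=50 * 2**20, stale_days=180):
    report = {"big": [], "stale": [], "duplicates": {}}
    by_digest = {}
    now = time.time()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            if st.st_size >= big_bytes:
                report["big"].append(path)
            if now - st.st_mtime > stale_days * 86400:
                report["stale"].append(path)
            with open(path, "rb") as f:  # hash contents for dup check
                digest = hashlib.sha1(f.read()).hexdigest()
            by_digest.setdefault(digest, []).append(path)
    report["duplicates"] = {d: ps for d, ps in by_digest.items()
                            if len(ps) > 1}
    return report
```

A production tool would compare sizes before hashing and stream large files rather than reading them whole; the sketch favors brevity over efficiency.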


San Francisco, CALIF. — Pulse, developer of the leading technology for 3D Animation on the Web, announced that it will offer an exporter for Alias|Wavefront’s Maya 3D animation and visual effects software. The free Pulse Exporter will allow Alias|Wavefront users to export interactive, real-time 3D created using the Maya platform directly into Pulse and onto the Web. “Whether Maya developers are creating content for entertainment, advertising, e-commerce or e-learning, with 200 million computers equipped with the Pulse Player, Maya artists now have another professional tool to allow them to expand their audiences by creating streaming interactive content for the Web,” said Pulse Senior Vice President of Product Marketing Don Harris. “Being able to export Maya content for use on the Web quickly and easily, in the widely-accepted Pulse format, is great news for the Maya community, which is made up of some of the world’s top studios,” said Chris Ford, Sr. Product Manager for Maya of Alias|Wavefront. “The new exporter announced today by Pulse will give Maya users a great new way to bring their creations to life on the Internet.”

The Pulse Exporter will give Maya users the power of interactivity. Using JavaScript and Pulse’s own proprietary scripting language, Maya creators will be able to assign behaviors that allow a viewer to select them in any order using mouse or keyboard controls. When content is created in Maya, the Pulse Exporter will export geometry, texture maps and animation, including rigid-body animation and vertex-blended deformations. Bones that control skin deformation will also be maintained and their motions preserved for the Web. The Maya exporter will be widely available in the first half of 2001. As a leading innovator of 3D graphics technology, Alias|Wavefront develops software for the film and video, games, interactive media, industrial design, and visualization markets. Its customers include Blue Sky, Digital Domain, Electronic Arts, LucasArts Entertainment Company, Industrial Light & Magic, Pixar, Sega, Sony Pictures Imageworks, The Walt Disney Company, Verant Interactive and Westwood Studios. Alias|Wavefront is a wholly owned, independent software company of SGI(TM) with headquarters in Toronto and technical centers in Seattle and Santa Barbara. Please visit the Alias|Wavefront web site.


Fremont, CALIF. — While the recounts continued in Florida, a clear winner emerged from Comdex last week, with Maxoptix winning high praise from analysts, editors and customers for its breakthrough Optical Super Density (OSD) technology. OSD is a new ultracapacity optical storage technology that sets new capacity and performance standards for optical disk drives. The first-generation product demonstrated by Maxoptix at Comdex provides a capacity of 26 gigabytes per 5.25-inch form factor double-sided disk. The successful Comdex demonstration fulfills the promise made by Maxoptix at last year’s Comdex show to deliver a fully operational OSD drive in preparation for first customer deliveries in 2001. The cost, capacity and performance benchmarks Maxoptix has achieved with OSD – combined with the very high reliability and durability of optical media – open up removable optical disk technology to a new range of applications that have previously spurned optical technology because of capacity and throughput limitations.

“OSD is a technology that can go far beyond ISO 5.25-inch MO technology’s capabilities, and has the potential to play an important role in network storage environments,” said Wolfgang Schlichting, research manager, removable storage, at IDC. “Maxoptix has created an optical storage technology that is well positioned for areas such as network backup and archiving, data warehousing and Internet content storage.” President of Strategic Market Decisions Group John Freeman agreed, noting that OSD is a strong candidate to replace magnetic tape technology in network backup and archiving, given its high performance, low cost, extremely durable media and random accessibility. “With OSD, Maxoptix is changing the way the market perceives optical disk technology,” he said. “The stigma of moderate capacity and limited performance is gone forever.” “The reaction at Comdex from customers and other visitors exceeded our most optimistic expectations for OSD,” said Fred Bedard, senior vice president of sales and marketing at Maxoptix. “It was very gratifying to watch as people realized the unlimited prospects for OSD, as we believe it will become a new standard for network storage applications.”


San Diego, CALIF. — Luminous Networks, the industry’s first provider of carrier-class Gigabit Ethernet over Fibre metro access systems, today launched its intelligent optical solutions in Europe. Unlike other optical networking solutions, the Luminous PacketWaveTM family of carrier-class switches has been developed specifically for the metro access market and provides the intelligence necessary for operators to quickly deploy bandwidth intensive services to meet growing customer requirements. “Service providers are in the midst of a major technology and business transformation, as they upgrade their systems to carry the explosion of Internet-based traffic,” said Alex Naqvi, Luminous Networks’ president and CEO. “However, the build-out of optical networks is impeded by carriers’ ability to scale the networks to handle not only the growing IP-based data traffic, but also carrier-class voice services. Luminous is the first provider to address this urgent need for the metro optical market, with an intelligent, efficient solution.”

Bringing its technology to Europe, Luminous has established its headquarters for Europe, Middle East, and Africa (EMEA) in the UK. Headed by Dean Zagacki, Sales Director – EMEA, the new European operation will provide both sales and technical support to European telecoms and Internet service providers. Zagacki joins Luminous from Cisco Systems, where he was manager of global accounts and new business. Luminous provides service providers currently building and operating MANs (Metropolitan Area Networks) with an intelligent means of delivering high-bandwidth services to enterprise users. A key benefit of its intelligent optical solutions is that they allow operators to integrate legacy voice systems with IP data over a single, high-speed optical network. “To date, operators have made huge investments in boosting the performance at the core of their networks,” said Zagacki. “Innovative operators are now looking to exploit the high availability of bandwidth at the core to deliver new services to their business customers. This requires the deployment of intelligent optical solutions in the metropolitan area, taking speed-of-light communication all the way to the customer premises.” Luminous PacketWave switches make it possible for telecommunications carriers and service providers to transport large volumes of IP data traffic, along with traditional voice traffic, throughout a MAN at a fraction of the cost of existing circuit-switched technologies.


Fort Lauderdale, FLA. — DataCore Software, a leading innovator in storage virtualization software, introduced new capabilities for its SANsymphony product. The upgrade to DataCore’s flagship virtualization software, available immediately, delivers advanced features and functions designed to simplify the configuration, administration and operation of non-stop network storage resources while accelerating virtual disk performance. SANsymphony is recognized as the premier SAN management solution for consolidating mixed storage devices into a fault-resilient, centrally-managed networked storage pool. Its host-independent virtualization approach enables critical assets to be reallocated on-demand through a simple drag-and-drop function, without disruption. These powerful attributes immediately relieve application servers and their administrators from the stifling downtime and data shuffle clogging the LAN. “DataCore is the first mover in the SAN virtualization sector to deliver sophisticated data storage functions without compromising business transaction performance,” said George Teixeira, president and CEO of DataCore. “There are a lot of promises being made in this space, and DataCore is uniquely fulfilling those expectations – today. Our product is proven, easy-to-use and has already demonstrated significant cost savings and unprecedented return on investment for departmental and enterprise-wide production shops.”

New features extend SANsymphony’s intuitive management interface, consolidating and streamlining the administrative process while broadening the range of services that can be performed by the network storage pool. The new SANsymphony release provides:
– Simplified configuration and administration: The software auto-discovers host port addresses, reducing the administrative effort to configure clients of the storage pool. Access to native switch-zoning interfaces is also supplied through the SANcentral GUI to implement end-to-end security over the fabric.


Waltham, MASS. — GiantLoop Network, Inc., a leader in Enterprise Optical Networking services, announced Optical Storage Networking (OSN), the first and only portfolio of professional services and managed optical networking services specifically designed for the storage applications of Global 250 companies. GiantLoop’s OSN enables large enterprises to aggregate various storage networking protocols like ESCON/FICON, Fibre Channel, Ethernet, or IP, on a single fiber-optic network. OSN is the storage-centric side of GiantLoop’s Enterprise Optical Networking. Enterprise Optical Networking aggregates data and storage networking protocols on optical networks that span campus, metropolitan, and inter-city geographies. According to Forrester Research, the Global 250 companies will add an average of 22 terabytes of new storage this year and 150 terabytes of new storage in 2003. This growth is driven by new applications, e-business efforts, and large databases for business intelligence. To protect business-critical information, many firms extend storage channels and build remote data centers to mirror storage at their primary sites, but current solutions are hampered by distance limitations, technology complexities, high costs, and inadequate service options. Optical networking promises to ease these restrictions, but optical solutions require companies to undertake a series of complex tasks like procuring dark fiber, purchasing DWDM-based optical switches, integrating storage and optical networks, and managing nascent technologies on an ongoing basis. Most firms don’t have the skills, processes, or time to embark on this large, difficult project.

GiantLoop’s Optical Storage Networking replaces these complex technology selection, implementation, training, and management tasks with a predictable, reliable, 7x24 service. Through its professional services and managed optical networking services, GiantLoop is a one-stop shop for assessing, planning, designing, implementing, and managing optical storage networks that support all current and future storage protocols. “Large companies are caught in a Catch-22,” said Mark B. Ward, president and chief operating officer of GiantLoop Network. “They need optical networks to mirror data and support the explosion of online storage, but don’t have the skills, time, or budgets to implement these technologies in rapid fashion. GiantLoop’s OSN is the right solution at the right time. OSN is a comprehensive storage, optical networking, data center, and operations solution that combines both professional services and managed optical services. Our skills allow us to proceed from assessment through implementation quite quickly. This speed of deployment allows our customers greater business flexibility and greater protection.”

“The coming together of storage and optical technologies will be a further catalyst to the advance of the high-performance Internet,” said Steve Pusey, president, Emerging Sales, Nortel Networks. “GiantLoop is poised to lead the advance in creating powerfully integrated optical and storage solutions. We look forward to working closely with GiantLoop to bring the power of optical storage networking into cities and businesses of all kinds.” “Optical networking adds capability but also complexity to channel extension,” said Mark Knittel, vice president of product architecture strategy and business development at Computer Network Technology Corporation. “Customers relish the benefits but are extremely leery of additional technical hurdles. By working with CNT and GiantLoop, customers can offload the complexities and get all of the advantages.”


Reading, UK — Sequoia Industrial Systems Division is helping telecommunications developers solve critical time-to-market issues by introducing the new OpenArchitect CompactPCI Ethernet switching platform from ZNYX Networks. The platform offers both standard and application-specific switch functions for network system architects, and speeds time-to-market by employing open-source Linux for switch management. The new OpenArchitect CompactPCI switch is targeted at a wide array of carrier-class telco, embedded, ISP, Internet backbone, and enterprise applications. The first product in the series is the ZX4500 OpenArchitect, the industry’s most advanced Ethernet 10/100/1000 switching system implemented on a rugged, hot-swappable 6U CompactPCI subsystem. The embedded switch fabric provides line-rate service for 24 ports of 10/100BaseTX (either front or rear panel) and two ports of front-panel 1000BaseFX Gigabit Fibre, capable of switching more than 6.6 million packets per second. Sixty-four megabytes of packet buffer memory is available to resolve network congestion. The embedded Linux operating system runs on a Motorola MPC8240 PowerPC with 32MB of SDRAM and 32MB of Flash ROM. An on-board Processor-PMC slot is provided for adding either additional I/O peripherals (such as media conversion) or a second CPU for more demanding packet processing functions.

The ZX4500 OpenArchitect’s small footprint has the advantage of high density, which enables network architects to pack more functionality into limited rack space and achieve a cost advantage over conventional telco switching solutions. In addition, the embedded switch fabric is fully scalable, and can be quickly configured to bridge to other media types using the Processor-PMC slot. By offering features such as IEEE 802.1p Class of Service, 802.1q VLANs, and stacking of up to 720 ports of 10/100 and 30 ports of gigabit Ethernet under one management umbrella, OpenArchitect is designed to empower partners with sophisticated capabilities that support individual requirements. The ZX4500 can be run out of the box with any of a number of Flash ROM images downloaded from the World Wide Web. If network architects want additional functionality beyond that found in the standard binaries, a Linux-based software development kit, including source code, is available to enable designers to easily generate their own images. Switch applications already running on Linux or UNIX can be quickly ported to OpenArchitect, and customers can upgrade the switch in the field with new software at any time. Added functionality could include new Layer 2 or Layer 3 protocols, Layer 2 through 7 filtering, management functions, security, media conversion, and high-availability failover configurations.


Monrovia, CALIF. — ParaSoft, a leading provider of software error prevention and error detection solutions, announced the release of Jtest for Linux, a unit testing tool for Java. Java developers can take advantage of this sophisticated tool to facilitate the development of high-quality software applications on Linux. ParaSoft has been providing multi-platform development tools for C/C++, Java and Web applications for more than 7 years. The availability of Jtest for Linux comes at an opportune time when Linux is rapidly gaining credibility as a development platform. A study conducted by Evans Data Corp. in February of 2000 concluded that the expectation of deploying Linux applications by Development and IT managers in large corporations increased by 75% during the last six months of 1999 and that the percent of companies running Linux increased by 95%. These findings demonstrate the high acceptance rate of Linux in corporations. Jtest’s support for Linux was a direct response to demands from the Java development community working on Linux.

“The Linux platform is rapidly becoming a programming platform of choice for Java developers. As more developers move onto this platform, development and quality assurance tools such as Jtest will be needed,” said Thomas Chen, ParaSoft vice president of Development Tools. “Jtest is the first tool of its type available for the Linux platform.” Jtest is a fully integrated, easy-to-use, automatic class testing tool for Java. Jtest integrates every essential type of Java testing into one intuitive tool that automatically performs static analysis, white-box testing, black-box testing and regression testing. Jtest works on any Java class. Developers who want to produce top-notch Java code can use Jtest as soon as they have constructed and compiled each class of their project.
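Jtest itself targets Java and its internals are proprietary; the following analog, written in Python purely for illustration, sketches only the automatic regression-testing idea described above: exercise each public method of a class with generated inputs, snapshot the outcomes, and flag any behavioral change on a later run. The input-sampling scheme is a deliberate simplification.

```python
import inspect

# Illustrative analog (not Jtest's implementation): call each public
# single-argument method on fixed sample inputs, record results and
# raised exceptions, then diff snapshots to detect regressions.

SAMPLE_INPUTS = [-1, 0, 1, 7]  # crude stand-in for generated test inputs

def snapshot(cls):
    """Record the behavior of every public method on sample inputs."""
    obj = cls()
    results = {}
    for name, method in inspect.getmembers(obj, callable):
        if name.startswith("_"):
            continue
        for arg in SAMPLE_INPUTS:
            try:
                results[(name, arg)] = ("ok", method(arg))
            except Exception as exc:  # record failures as behavior too
                results[(name, arg)] = ("raised", type(exc).__name__)
    return results

def regressions(golden, current):
    """Keys whose recorded behavior changed between snapshots."""
    return [k for k in golden if golden[k] != current.get(k)]
```

The white-box half of the idea, deriving inputs from the code's own branch structure rather than a fixed sample list, is exactly what a tool in this class automates.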


Natick, MASS. — The MathWorks, Inc., a leading supplier of technical computing software, today announced the release of its new solution for test and measurement applications. The new Test & Measurement Suite offers a set of MATLAB tools and add-on products for interfacing with industry-standard data acquisition devices and instruments and for analyzing the acquired data. Supported interfaces include GPIB, VXI, and serial port standards as well as communication with popular data acquisition hardware. A key component of the solution is MATLAB 6, the foundation for the Test & Measurement Suite. MATLAB combines hundreds of advanced analysis functions, such as signal analysis, linear algebra, and basic statistics, with practical engineering and scientific graphics. Engineers and scientists can now collect data using MATLAB test and measurement tools and bring the data directly into MATLAB for fast and accurate analysis.

The Test & Measurement Suite is designed to support the entire measurement and analysis process, including interfacing with data acquisition devices and instruments, analyzing and visualizing the acquired data, and producing presentation-quality output. The suite provides new tools developed specifically for test and measurement applications: the Data Acquisition Toolbox 2 and the new Instrument Control Toolbox 1. Both products – released concurrently with MATLAB 6 – provide measurement connectivity from the MATLAB environment. Chris Vrettos, senior technical staff engineer at Marconi Medical Systems, is using MATLAB tools to develop a tester for a Computed Tomography (CT) scanner chip. An important aspect of Mr. Vrettos’ test set-up is the ability to analyze the collected data immediately. “Getting measurements directly into MATLAB is a big bonus. It avoids the multi-step process of saving acquired data to a file, and then reading the file into MATLAB. We’ve already integrated the Instrument Control beta release into various projects,” Vrettos said. The MathWorks’ Test & Measurement Suite enables users to build analysis-rich measurement systems. Both MATLAB users who need to work with measured data and Test and Measurement professionals who need proven analysis and visualization capabilities will benefit from these new tools. For example, Dr. Kobi Cohen, research engineer at I.M.I. – Rocket Systems Division, is responsible for developing test and analysis systems. Dr. Cohen used the MATLAB Test & Measurement solution for component-level design. In the stimulus-response test set-up he developed, the Instrument Control Toolbox is used to communicate with instruments and vary parameters of the test, while the Data Acquisition Toolbox simultaneously collects eight channels of data. This data is then analyzed and plotted in MATLAB. Dr. Cohen has also built a user interface for his test set-up in MATLAB to automate the process. “My test is working perfectly. 
I’m able to control eight instruments at the same time using serial and GPIB. I can then instantly analyze the results in MATLAB,” Cohen said.


Sunnyvale, CALIF. — Marvell, a technology leader in the development of extreme broadband DSP-based mixed-signal integrated circuits for communications signal processing markets, announced its HighPHY family – the industry’s first mixed-signal read channel physical layer (PHY) devices to surpass one GigaHertz speed. “Marvell’s HighPHY devices are the first to achieve data rates of 1.2 billion bits per second, a 60% improvement over our previous record-breaking read channel solutions,” said Dr. Alan J. Armstrong, Marvell’s vice president of Marketing for the Data Storage Group. “With Marvell’s HighPHY devices, enterprise OEMs can now build even higher performance systems for Storage Area Networks (SAN), Network Attached Storage (NAS) and Redundant Array of Independent Disks (RAID) to meet the ever increasing demand for network storage.” Marvell’s HighPHY family of read channel physical layer devices, with Target-Morphing DSP technology, can adapt across all storage platforms, including mobile and desktop systems. This allows customers to maximize storage capacity and manufacturing yields, resulting in lower overall system cost.

Added Dr. Sehat Sutardja, Marvell’s president and CEO, “In May of this year, Marvell introduced our Alaska family of 0.18 micron CMOS Gigabit Ethernet physical layer devices, with the industry’s most advanced mixed-signal DSP circuitry running at 125 MHz clock rate. With the introduction of our HighPHY family of read channel devices, we have successfully extended our mixed-signal and DSP technologies to run at 1.3 GigaHertz.” Marvell’s HighPHY devices incorporate the latest advancements in DSP technology, including Target-Morphing noise-predictive Viterbi, resulting in the best error rate performance in the industry. While existing read channel devices incorporate fixed and limited Viterbi targets, Marvell’s HighPHY devices support any number of targets allowing for optimal performance with existing and future recording head and media technology. Marvell’s HighPHY family is comprised of the 88C5500 and 88C5520 read channel physical layer devices. The 88C5500 is packaged in a 100-pin 14mm x 14mm LQFP-Exposed Pad for enterprise and desktop applications, and the 88C5520 is packaged in a 64-pin 10mm x 10mm TQFP for mobile storage applications. The devices are fully pin-compatible with Marvell’s previous generation 88C5200 and 88C4200 read channel families, allowing for easy system development.
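The hardware implements detection at gigahertz rates, but the principle is easy to sketch. The fragment below, an illustration only, is a minimal Viterbi detector in Python for an arbitrary partial-response target, the configurability the text contrasts with fixed-target designs. The sample model y[k] = sum(target[i] * bits[k-i]) with squared-error branch metrics and an all-zero initial state are assumptions of the sketch, not Marvell's design.

```python
import itertools

def viterbi_detect(samples, target):
    """Maximum-likelihood sequence detection for a partial-response
    channel where sample[k] = sum(target[i] * bits[k-i]).  The target
    is a plain list, so any response length works."""
    mem = len(target) - 1                        # channel memory
    states = list(itertools.product((0, 1), repeat=mem))
    INF = float("inf")
    cost = {s: (0.0 if s == (0,) * mem else INF) for s in states}
    paths = {s: [] for s in states}
    for y in samples:
        new_cost = {s: INF for s in states}
        new_paths = {}
        for state in states:
            if cost[state] == INF:
                continue
            for bit in (0, 1):
                hist = (bit,) + state            # bits[k], bits[k-1], ...
                expect = sum(t * b for t, b in zip(target, hist))
                metric = cost[state] + (y - expect) ** 2
                nxt = hist[:mem]                 # new state: latest bits
                if metric < new_cost[nxt]:
                    new_cost[nxt] = metric
                    new_paths[nxt] = paths[state] + [bit]
        cost, paths = new_cost, new_paths
    best = min(cost, key=cost.get)
    return paths[best]
```

For a duobinary-style target [1, 1], the detector recovers the written bits even from mildly noisy samples, since any wrong bit decision incurs a large squared-error penalty on at least one sample.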


Schaumburg, ILL. — InstallShield Software Corp., the leader in Internet-ready software installation and distribution solutions, announced a major advancement for software developers targeting IBM and other platforms. Immediately available are three new InstallShield Multi-Platform Edition products: InstallShield Express – Multi-Platform Edition, InstallShield Professional – Multi-Platform Edition, and InstallShield Enterprise – Multi-Platform Edition. The result of a yearlong co-development relationship with IBM, the highly-anticipated Multi-Platform Edition products (formerly known as InstallShield Java Edition) further extend InstallShield’s de facto standard installation experience into the multi-platform world. The Multi-Platform Edition products provide developers with the capability to create one powerful and consistent application installation that meets the needs of multiple platforms, including Solaris (SPARC and x86), Linux (Red Hat, Caldera OpenLinux, SuSE Linux, TurboLinux), AIX, OS/2, and Windows. With the release of InstallShield Enterprise – Multi-Platform Edition, InstallShield now also offers support for OS/400.

“Our co-development work with IBM and the release of the InstallShield Multi-Platform Edition products represent a major milestone for software authors targeting multi-platform environments,” said Stan Martin, president and chief operating officer, InstallShield Software Corp. Citing the company’s relationships with IBM, Microsoft Corp. and Sun Microsystems, Inc., Martin added, “Leading operating-system vendors continually rely on InstallShield to standardize how software and other digital goods are installed, managed and used across their platforms. By working with IBM, we’re advancing the installation standard across multiple platforms, helping developers significantly reduce development costs while delivering a user-friendly installation experience.” “Organizations’ computing environments are getting increasingly complex, and as a result are more difficult to manage,” said Fred Broussard, senior research analyst at IDC. “Deployment tools need to ensure they provide the flexibility required to design a solution to distribute applications to new users easily, quickly, and with minimal installation problems.” Using InstallShield Multi-Platform Edition products, developers can address the unique requirements of each platform, eliminate redundant work, and take control of the end-user’s first experience with their software. Although the Multi-Platform Edition products’ installation packages can be installed on virtually any platform that supports Java, a number of platform-specific issues keep platform-neutral installation packages from providing a simple and reliable user experience.


Mountain View, CALIF. — Mountain View Data, Inc. announced the beta release of InterMezzo 1.0 – an advanced distributed file system software solution optimized for high availability, server mirroring and backup, and content dissemination. With the launch of InterMezzo, Mountain View Data is now gearing up to provide its data management services to enterprise customers, data centers, and ISPs/ASPs throughout North America and in Asian markets. “InterMezzo brings a fundamental technology to synchronize active file systems across multiple servers in environments with intermittent network connectivity. InterMezzo is the ideal file system solution for many data replication and synchronization applications, such as content distribution and server mirroring,” said Dr. Peter Braam, CTO and EVP of Engineering at Mountain View Data. “Optimized to provide simple yet robust and scalable replication, InterMezzo is positioned to see wide acceptance for server mirroring and backup, as well as for synchronizing mobile clients.”

Braam started development of InterMezzo in the fall of 1998, and since then has worked with many companies and individuals, as is common with Linux-based applications. Most recently, Tacitus Systems, based in Cherry Hill, New Jersey, contributed extensive testing and debugging to the application. “Tacitus Systems, a storage service provider, realized that a better and more cost-effective solution needed to be developed to handle our customers’ growing data needs,” said Trevor Hughes, President of Tacitus Systems. “When our team came across InterMezzo, it became very clear that InterMezzo offered the right approach and the most promising potential to meet our customers’ needs.”
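Synchronizing active file systems over intermittent connections, as InterMezzo does, is commonly built on a replay journal: the primary records each file operation with a sequence number, and a replica applies any entries newer than its high-water mark whenever connectivity returns. The sketch below illustrates that general journal-replay idea in Python; the field and operation names are hypothetical and do not reflect InterMezzo's actual on-disk format or protocol.

```python
def apply_journal(replica: dict, journal: list, last_seen: int) -> int:
    """Replay journal entries newer than last_seen against an in-memory
    replica (path -> contents); return the new high-water mark."""
    for entry in journal:
        if entry["seq"] <= last_seen:
            continue  # already applied; safe to re-run after a disconnect
        if entry["op"] == "write":
            replica[entry["path"]] = entry["data"]
        elif entry["op"] == "unlink":
            replica.pop(entry["path"], None)
        last_seen = entry["seq"]
    return last_seen
```

Because each replica tracks only a single sequence number, a mobile client that has been offline for days resynchronizes with one pass over the journal, and replaying old entries is harmless.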


Cambridge, MASS. — NetGenesis Corp., a leading developer of E-Metrics solutions for Fortune 1000 and Web 500 companies, announced that its flagship NetGenesis 5 E-Metrics Solutions Suite now supports AIX, IBM’s award-winning UNIX platform. Companies that prefer an IBM UNIX solution can now leverage their highly scalable e-business infrastructure to manage customer relationships more effectively through NetGenesis’ e-metrics and analytics applications. NetGenesis will work closely with IBM to deploy the joint solution at enterprise-class e-businesses. “NetGenesis’ Web analytics solutions enable IBM customers to understand the end results of their Web-based marketing initiatives and the overall health of their online businesses,” said Mike Kerr, vice president, IBM Web Servers. “Together, NetGenesis and IBM can now offer companies that choose an IBM UNIX solution an invaluable component for running the industry’s leading UNIX solution for profitable e-businesses.”

With over 1,000,000 IBM UNIX servers shipped, IBM is a leading provider of UNIX systems worldwide. With DB2 Universal Database, IBM has the fastest growing database business in the industry and consistently delivers the most comprehensive portfolio of data management solutions. With the NetGenesis solution, IBM UNIX customers can obtain an e-metrics and analytics solution to optimize site design, content and customer relationships. NetGenesis’ e-metrics solution is an end-to-end analytic suite that enables e-businesses to analyze e-customer activity in the context of the overall business. E-metrics is the foundation of a strong CRM solution, providing an analytic front-end and infrastructure back-end to link online and offline data and boost the bottom line. Organizations are increasingly employing multiple operating systems and databases to serve customers who use the Web as a critical touchpoint. Consequently, analytics-based eCRM solutions need to support a variety of platforms and applications. NetGenesis has architected its solution to fit into and leverage the complex environments that today’s enterprise e-businesses present. By taking an open-architecture approach and leveraging one common code base, NetGenesis is able to support multiple platforms and databases, offering customers flexibility and choice with the same standard of scalability and functionality.


Foster City, CALIF. — Inktomi Corp., developer of scalable Internet infrastructure software, announced the availability of Inktomi Traffic Server 4.0. The new release extends the Traffic Server platform for the first time to the Linux operating system, providing a compelling price/performance combination for companies that have adopted this increasingly popular operating environment. Inktomi Traffic Server 4.0 software has been optimized for significantly faster performance than previous releases and features more efficient system utilization, resulting in extra processing power for the delivery of value-added services. In addition, the new version offers additional security capabilities and improved content distribution and management functionality. “Today’s announcement broadens Inktomi’s reach with one of the most flexible caching solutions available to meet evolving enterprise requirements for robust Internet infrastructure within their networks,” said Ed Haslam, chief strategist, Network Products Division at Inktomi. “Supporting the widest range of platforms and data formats, Traffic Server software is optimized for both enterprises and service providers seeking to reduce bandwidth requirements, accelerate network performance and deploy a variety of edge services.”

The industry’s most scalable network cache platform for both distributed enterprise and service provider networks, Inktomi Traffic Server supports the widest range of data protocols available for increased bandwidth savings and greater control of data. As an extensible software solution, Traffic Server is the only network cache that also functions as a platform for delivering value-added services and applications at the edge of the network, including authentication, content transformation, filtering, streaming media, and virus-checking. In addition to the software-based solution, Inktomi offers full-featured Traffic Server technology in appliance form, through devices powered by the Inktomi Traffic Server Engine, from leading original equipment manufacturer (OEM) vendors including 3Com and Intel. Today, Inktomi Traffic Server delivers core enabling technology for leading service providers and enterprises, such as America Online, Excite@Home, Genuity and Merrill Lynch.
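The bandwidth savings a network cache like Traffic Server delivers comes from answering repeat requests locally instead of returning to the origin server. A minimal sketch of that mechanism, using a simple LRU eviction policy, is below; Traffic Server's real replacement policies, protocol handling, and plugin API are far more sophisticated, and `fetch_origin` here is a hypothetical stand-in for an actual origin request.

```python
from collections import OrderedDict

class EdgeCache:
    """Toy forward cache: serve repeats from memory, evict LRU on overflow."""

    def __init__(self, capacity: int, fetch_origin):
        self.capacity = capacity
        self.fetch_origin = fetch_origin   # callable: url -> response body
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def get(self, url: str) -> str:
        if url in self.store:
            self.hits += 1
            self.store.move_to_end(url)    # mark as recently used
            return self.store[url]
        self.misses += 1                   # cache miss: go to the origin
        body = self.fetch_origin(url)
        self.store[url] = body
        if len(self.store) > self.capacity:
            self.store.popitem(last=False) # evict least-recently-used entry
        return body
```

Every hit is a request that never crosses the WAN link, which is exactly the reduction in bandwidth and origin load the article describes.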


Fremont, CALIF. — VA Linux Systems, Inc. announced the availability of “SourceForge OnSite,” a ground-breaking subscription service built on the Web-based collaborative development system (CDS) powering SourceForge.net, the world’s largest Open Source development center. Installed behind customers’ corporate firewalls, SourceForge OnSite provides a turn-key collaboration system – fully customized, implemented and supported by VA Linux Professional Services – for enterprise-class customers that want to leverage Open Source tools and methods for internal software development. Agilent Technologies’ central research lab is one of the first VA Linux customers to deploy SourceForge OnSite to support its geographically distributed development teams. The SourceForge collaborative development system is a proven, powerful platform, currently supporting more than 12,000 software projects worldwide – including XFree86, KDE, Python and MySQL – and over 92,000 registered users via SourceForge.net.

SourceForge OnSite provides an integrated toolset for centralized code, project and knowledge management in a secure environment. By deploying SourceForge OnSite, enterprise IT developers, independent software vendors and consulting firms can take advantage of significant efficiencies enabled by SourceForge within their own companies: code re-use and archives, enhanced communication within and across geographically distributed development teams, and standardization on a single toolset. As a Web-based technology, SourceForge is cross-platform, customizable and compatible with many existing development and support technologies. SourceForge OnSite features include bug tracking, patch management, task management, source control, code sharing, communication, support/issue tracking, document management, team productivity analysis and statistical reporting functionality. “SourceForge OnSite provides an enterprise-class collaborative development system that enables our customers to leverage SourceForge internally, and focus on their core competencies rather than worrying about the maintenance and support of their development infrastructures,” said John “Tiberius” Hall, vice president of strategic planning, VA Linux Systems. “With the rapid growth and proven success of SourceForge.net as the world’s largest ASP for Open Source developers, many companies have expressed interest in deploying a customized, supported version of SourceForge as a next-generation infrastructure enabling more efficient software development within their organizations.”


Salt Lake City, UTAH — High-detail, photospecific visualization can now be delivered via the Internet, thanks to Evans & Sutherland Computer Corp.’s (E&S) new RSWeb Server. RapidScene visualization products are used by the military and law enforcement agencies for analysis, mission planning, rehearsal, training, and security planning. Demonstrated for the first time at last week’s I/ITSEC 2000, RSWeb Server software provides the same high-quality visualization produced by RapidScene to a standard Web browser. Because RSWeb Server uses the server for rendering, multi-Gbyte databases can be viewed in detail on any computer, even a laptop. The user interface provides quick click-and-render positioning, adjustable settings, and the ability to view any perspective and print a scene.

The RSWeb Server software includes the Web server, Java application, and browser interface, all complete and easy to install. The software can be purchased as an add-on to the RapidScene software package or for a stand-alone server. “RSWeb Server demonstrates our commitment to providing the most advanced and updated visualization technologies available today,” said Dave Figgins, E&S Simulation Group vice president. “By taking advantage of advancements in Web technology, we can make products like RapidScene more accessible, and therefore more useful, to our customers.” RapidScene visualization products render high-resolution, photospecific databases using information from aerial or satellite photographs. These databases, which feature superb image quality and outstanding accuracy, can be used for a variety of applications, including military planning and mission rehearsal, security planning and scenario analysis, law enforcement training, and urban planning.

