Product Watch: Sony Unveils Magneto-Optical Drive. Pulse Has Exporter For Maya. Maxoptix Dazzles Crowd.

December 8, 2000



San Jose, CALIF. — Sony Electronics announced its fifth-generation 5.25-inch magneto-optical (MO) drive, nearly doubling the capacity of its popular 5.2GB MO drive to 9.1GB and increasing its data transfer rate by up to 20 percent. The new multifunctional drive offers 14 times the capacity of the first-generation 650MB MO drive and is backward read compatible with all four previous generations of media. The internal Sony SMO-F561 MO drive offers high-speed access to 9.1GB of data, archival capability, reliability and portability, making it ideal for such data-intensive applications as document and medical imaging, telecommunications, multimedia, graphics, design and audio/video editing.

“The development of 14X MO technology demonstrates Sony’s consistent technical leadership, spanning more than a decade since the introduction of the first MO drive,” said Toshi Kawai, marketing manager of MO drives for Sony Electronics’ Component Company. “We are committed to providing our customers with the capacity levels they require, while maintaining backward read compatibility with previous generations of media.” The increased capacity was achieved using magnetically induced super resolution (MSR) technology. MSR allows optical drives to read recorded marks on the disk that are smaller than the laser beam spot size, while reducing the potential for “cross-talk” from adjacent tracks. This technique allows a significant increase in track density while maintaining data integrity. This generation of MO technology also incorporates land and groove recording, a technique that permits data to be recorded both in the deeper spiral groove track and on the raised land area between adjacent grooves. This recording approach allows a narrower track pitch, resulting in more efficient use of the disk’s total recording surface.
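The announcement’s capacity arithmetic checks out; as a quick verification (the intermediate generation capacities listed in the code are the standard ISO 5.25-inch MO points, assumed here rather than taken from the announcement):

```python
# Quick check of the capacity claims. The intermediate generation capacities
# (in GB) are the standard ISO 5.25-inch MO points, assumed for illustration.
generations = [0.65, 1.3, 2.6, 5.2, 9.1]

# "nearly doubling" the 5.2GB drive's capacity:
print(f"{9.1 / 5.2:.2f}x the previous generation")  # 1.75x

# "14 times the capacity of the first generation 650MB MO drive":
print(f"{9.1 / 0.65:.0f}x the first generation")  # 14x
```

The progression is a near-doubling at each step, which is why five generations span a 14x range.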


San Francisco, CALIF. — Autonomy Corporation plc, a global leader in infrastructure software for the Web and the enterprise, announced the availability of a peer-to-peer implementation of its technology for the automated processing, management and delivery of unstructured information. Autonomy’s Personal Distributed Query Handler (PDQH) implementation allows multiple networked PCs to work as one seamless system. By letting team members harness the combined processing and storage capacity of their personal computers, together with the information each of them owns, Autonomy PDQH transparently unites resources with information to solve business-critical issues from any location. Autonomy PDQH technology automatically categorizes, tags, links and delivers documents stored on individuals’ hard drives and corporate file servers, providing a single point of access for all intellectual property in an organization. Actual processing is evenly distributed across all networked machines, creating a highly scalable information exchange.

For over two years, Autonomy’s DQH technology has allowed large systems, composed of multiple servers working together and reporting to one controlling system, to appear as one virtual system. By distributing complex systems across multiple computers, Autonomy has powered some of the highest-load and largest-volume unstructured information systems in the world while keeping hardware and management costs exceedingly low. Autonomy is able to provide fast, highly scalable systems that search hundreds of gigabytes of content in fractions of a second. Autonomy PDQH is a true peer-to-peer version of this technology and comes to market tested and proven.

To effectively leverage intellectual capital, people need instant access to both published and work-in-progress information. In both cases, access is hampered by the necessity of manually publishing documents, spreadsheets, Web pages, presentations and the like into a central repository or portal. In addition, for work in progress, which is rarely published, someone first must find out who is working on a particular project and then must request the information. Ultimately this is inaccurate, inefficient, and unsustainable in a large organization. The only way to effectively incorporate intellectual capital is to teach computers how to understand it. Autonomy’s software provides that intelligence. It enables computers to form an understanding of text, Web pages, e-mails, voice, documents and people’s areas of expertise. Because of this ability, it automates the categorization, management, personalization and delivery of unstructured information. In a peer-to-peer environment, this enables access to both published and unpublished work without having to know who specifically is working on it.
For example, an employee researching market conditions in Asia for an upcoming product launch would automatically be provided with a report a co-worker in Hong Kong is preparing on the manufacturing industry in China.
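Autonomy has not published PDQH’s internals, but the scenario above implies a fan-out/merge query pattern across peers: each machine searches its own documents and the requester merges the ranked results. A toy sketch under that assumption (all names and the term-overlap scoring rule are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch of a peer-to-peer query fan-out. Each peer scores its
# own local documents, and the requesting node merges the ranked results.
# (PDQH's actual internals and relevance model are not public.)

def search_peer(peer_docs, query):
    """Score each local document by naive term overlap with the query."""
    terms = set(query.lower().split())
    hits = []
    for title, text in peer_docs.items():
        score = len(terms & set(text.lower().split()))
        if score:
            hits.append((score, title))
    return hits

def distributed_query(peers, query):
    """Fan the query out to every peer in parallel and merge ranked results."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda docs: search_peer(docs, query), peers)
    merged = [hit for peer_hits in results for hit in peer_hits]
    return [title for score, title in sorted(merged, reverse=True)]

peers = [
    {"hk-report": "manufacturing industry in China"},
    {"memo": "quarterly budget"},
    {"asia-notes": "market conditions in Asia manufacturing"},
]
print(distributed_query(peers, "manufacturing in Asia"))
```

The Hong Kong report surfaces for the Asia researcher without either party knowing who holds what, which is the point of the peer-to-peer arrangement.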


Reston, VA. — W. Quinn Associates, Inc., developers of the industry-standard StorageCeNTral quota management and disk reporting suite for Microsoft Windows NT/2000, announced the general availability of SpaceMaXX SRM, a family of storage resource management reporting tools for desktops, server-attached storage, storage area networks, and network-attached storage devices. SpaceMaXX SRM monitors disk space consumption at the partition level; enables usage thresholds to be set on specific drives; triggers Web-based “best practice” reports at scheduled times, on request, or as disk utilization thresholds are reached; and takes pre-determined actions to regain wasted disk space. The product family includes SpaceMaXX SRM Server, which runs on Windows NT/2000 and 9x servers and on NAS devices, and SpaceMaXX SRM Professional, which runs on single Windows NT/2000, 95, 98, or Millennium Edition (“ME”) desktop computers.

By running SpaceMaXX SRM’s automated “best practice” reports, IT professionals can immediately identify the reason for explosive storage growth and determine how best to control it. These reports can specifically highlight non-business Web downloads, such as MP3 music files; unused or stale files to be archived or deleted; duplicates; old desktop backups; or voluminous e-mail attachments. Excess files can cause a server outage, increase the backup window, halt productivity or require the addition of more disks to expensive RAID subsystems. SpaceMaXX SRM report formats include Microsoft Excel and interactive Active HTML for drilling down to report details. For example, SpaceMaXX SRM’s HTML-based reports enable systems administrators, as well as end users, to open files, move them, or delete them right from the report through Microsoft’s Internet Explorer browser. The reports can also be integrated into a help desk’s intranet site or routed via e-mail to systems administrators. Najaf Husain, WQuinn’s president and CTO, says, “Typically, IT professionals aren’t aware that many gigabytes of data on their servers are pure clutter. Without SpaceMaXX SRM, these professionals would have to spend weeks manually locating files and then deleting them. SpaceMaXX SRM can reduce the clean-up process to minutes and can keep servers, desktops and NAS devices from turning into dumpsters. SpaceMaXX SRM’s storage resource management reports can identify the need for other storage functions, such as archiving files to a secondary storage device.” Visit the company’s Web site for more information.
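The threshold-and-stale-file behavior described above can be sketched in a few lines of scanning code. This is a hypothetical stand-in for illustration only; the real product’s rules, actions and interfaces are proprietary:

```python
import os
import time

# Illustrative sketch of a threshold check and "stale file" report of the
# kind the article describes. Every name here is a hypothetical stand-in;
# SpaceMaXX SRM's actual rules and interfaces are proprietary.

def usage_report(root, threshold_bytes, stale_days, now=None):
    """Total up disk usage under root and flag files untouched for stale_days."""
    now = time.time() if now is None else now
    total, stale = 0, []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            info = os.stat(path)
            total += info.st_size
            if (now - info.st_mtime) > stale_days * 86400:
                stale.append(path)
    return {"total_bytes": total,
            "over_threshold": total > threshold_bytes,
            "stale_files": stale}
```

A real SRM tool layers scheduling, per-user quotas and automated actions (archive, delete, notify) on top of exactly this kind of scan.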


San Francisco, CALIF. — Pulse, developer of the leading technology for 3D animation on the Web, announced that it will offer an exporter for Alias|Wavefront’s Maya 3D animation and visual effects software. The free Pulse Exporter will allow Alias|Wavefront users to export interactive, real-time 3D content created with the Maya platform directly into Pulse and onto the Web. “Whether Maya developers are creating content for entertainment, advertising, e-commerce or e-learning, with 200 million computers equipped with the Pulse Player, Maya artists now have another professional tool to allow them to expand their audiences by creating streaming interactive content for the Web,” said Pulse Senior Vice President of Product Marketing Don Harris. “Being able to export Maya content for use on the Web quickly and easily, in the widely-accepted Pulse format, is great news for the Maya community, which is made up of some of the world’s top studios,” said Chris Ford, Sr. Product Manager for Maya at Alias|Wavefront. “The new exporter announced today by Pulse will give Maya users a great new way to bring their creations to life on the Internet.”

The Pulse Exporter will give Maya users the power of interactivity. Using JavaScript and Pulse’s own proprietary scripting language, Maya creators will be able to assign behaviors that a viewer can trigger in any order using mouse or keyboard controls. When content is created in Maya, the Pulse Exporter will export geometry, texture maps and animation, including rigid-body animation and vertex-blended deformations. Bones that control skin deformation will also be maintained and their motions preserved for the Web. The Maya exporter will be widely available in the first half of 2001. As a leading innovator of 3D graphics technology, Alias|Wavefront develops software for the film and video, games, interactive media, industrial design, and visualization markets. Its customers include Blue Sky, Digital Domain, Electronic Arts, LucasArts Entertainment Company, Industrial Light & Magic, Pixar, Sega, Sony Pictures Imageworks, The Walt Disney Company, Verant Interactive and Westwood Studios. Alias|Wavefront is a wholly owned, independent software company of SGI(TM) with headquarters in Toronto and technical centers in Seattle and Santa Barbara. Please visit the Alias|Wavefront Web site for more information.


Fremont, CALIF. — While the recounts continued in Florida, a clear winner emerged from Comdex last week, with Maxoptix winning high praise from analysts, editors and customers for its breakthrough Optical Super Density (OSD) technology. OSD is a new ultracapacity optical storage technology that sets new capacity and performance standards for optical disk drives. The first-generation product demonstrated by Maxoptix at Comdex provides a capacity of 26 gigabytes per 5.25-inch form factor double-sided disk. The successful Comdex demonstration fulfills the promise made by Maxoptix at last year’s Comdex show to deliver a fully operational OSD drive in preparation for first customer deliveries in 2001. The cost, capacity and performance benchmarks Maxoptix has achieved with OSD – combined with the very high reliability and durability of optical media – open up removable optical disk technology to a new range of applications that have previously spurned optical technology because of capacity and throughput limitations.

“OSD is a technology that can go far beyond ISO 5.25-inch MO technology’s capabilities, and has the potential to play an important role in network storage environments,” said Wolfgang Schlichting, research manager for removable storage at IDC. “Maxoptix has created an optical storage technology that is well positioned for areas such as network backup and archiving, data warehousing and Internet content storage.” John Freeman, president of Strategic Market Decisions Group, agreed, noting that OSD is a strong candidate to replace magnetic tape technology in network backup and archiving, given its high performance, low cost, extremely durable media and random accessibility. “With OSD, Maxoptix is changing the way the market perceives optical disk technology,” he said. “The stigma of moderate capacity and limited performance is gone forever.” “The reaction at Comdex from customers and other visitors exceeded our most optimistic expectations for OSD,” said Fred Bedard, senior vice president of sales and marketing at Maxoptix. “It was very gratifying to watch as people realized the unlimited prospects for OSD, as we believe it will become a new standard for network storage applications.”


San Diego, CALIF. — Luminous Networks, the industry’s first provider of carrier-class Gigabit Ethernet over fiber metro access systems, today launched its intelligent optical solutions in Europe. Unlike other optical networking solutions, the Luminous PacketWave(TM) family of carrier-class switches has been developed specifically for the metro access market and provides the intelligence necessary for operators to quickly deploy bandwidth-intensive services to meet growing customer requirements. “Service providers are in the midst of a major technology and business transformation, as they upgrade their systems to carry the explosion of Internet-based traffic,” said Alex Naqvi, Luminous Networks’ president and CEO. “However, the build-out of optical networks is impeded by carriers’ limited ability to scale the networks to handle not only the growing IP-based data traffic, but also carrier-class voice services. Luminous is the first provider to address this urgent need for the metro optical market, with an intelligent, efficient solution.”

Bringing its technology to Europe, Luminous has established its headquarters for Europe, Middle East, and Africa (EMEA) in the UK. Headed by Dean Zagacki, Sales Director – EMEA, the new European operation will provide both sales and technical support to European telecoms and Internet service providers. Zagacki joins Luminous from Cisco Systems, where he was manager of global accounts and new business. Luminous provides service providers currently building and operating MANs (Metropolitan Area Networks) with an intelligent means of delivering high-bandwidth services to enterprise users. A key benefit of its intelligent optical solutions is that they allow operators to integrate legacy voice systems with IP data over a single, high-speed optical network. “To date, operators have made huge investments in boosting the performance at the core of their networks,” said Zagacki. “Innovative operators are now looking to exploit the high availability of bandwidth at the core to deliver new services to their business customers. This requires the deployment of intelligent optical solutions in the metropolitan area, taking speed-of-light communication all the way to the customer premises.” Luminous PacketWave switches make it possible for telecommunications carriers and service providers to transport large volumes of IP data traffic, along with traditional voice traffic, throughout a Metropolitan Area Network (MAN) at a fraction of the cost of existing circuit-switched technologies.


Fort Lauderdale, FLA. — DataCore Software, a leading innovator in storage virtualization software, introduced new capabilities for its SANsymphony product. The upgrade to DataCore’s flagship virtualization software, available immediately, delivers advanced features and functions designed to simplify the configuration, administration and operation of non-stop network storage resources while accelerating virtual disk performance. SANsymphony is recognized as the premier SAN management solution for consolidating mixed storage devices into a fault-resilient, centrally-managed networked storage pool. Its host-independent virtualization approach enables critical assets to be reallocated on demand through a simple drag-and-drop function, without disruption. These attributes immediately relieve application servers and their administrators of the downtime and data shuffling that clog the LAN. “DataCore is the first mover in the SAN virtualization sector to deliver sophisticated data storage functions without compromising business transaction performance,” said George Teixeira, president and CEO of DataCore. “There are a lot of promises being made in this space, and DataCore is uniquely fulfilling those expectations – today. Our product is proven, easy to use and has already demonstrated significant cost savings and unprecedented return on investment for departmental and enterprise-wide production shops.”

New features extend SANsymphony’s intuitive management interface for consolidating and streamlining the administrative process while broadening the range of services that can be performed by the network storage pool. The new SANsymphony release provides: – Simplified configuration and administration: The software auto-discovers host port addresses, reducing the administrative effort to configure clients of the storage pool. Access to native switch-zoning interfaces is also supplied through the SANcentral GUI to implement end-to-end security over the fabric.
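The drag-and-drop reallocation described above rests on a pooled-allocation model: mixed physical devices contribute capacity to one pool, and virtual disks are carved from it on demand. A toy sketch of that idea (SANsymphony’s actual mechanisms are not public, so all names and policies here are hypothetical):

```python
# Hypothetical sketch of storage pooling: physical devices of mixed sizes are
# consolidated into one pool, and virtual disks are carved from it on demand.
# (SANsymphony's real allocation policies are proprietary; this is only an
# illustration of the concept.)

class StoragePool:
    def __init__(self, devices):
        # devices: mapping of device name -> free capacity in GB
        self.free = dict(devices)
        self.virtual_disks = {}

    def allocate(self, name, size_gb):
        """Carve a virtual disk from whichever devices have free space."""
        extents, needed = [], size_gb
        for dev in sorted(self.free, key=self.free.get, reverse=True):
            if needed <= 0:
                break
            take = min(self.free[dev], needed)
            if take > 0:
                self.free[dev] -= take
                extents.append((dev, take))
                needed -= take
        if needed > 0:  # roll back if the pool cannot satisfy the request
            for dev, take in extents:
                self.free[dev] += take
            raise ValueError("pool exhausted")
        self.virtual_disks[name] = extents
        return extents

pool = StoragePool({"array-a": 100, "array-b": 40})
print(pool.allocate("vdisk1", 120))  # spans both arrays
```

Because the virtual disk is just a list of extents, it can exceed any single device and be regrown or moved without the host seeing anything but one logical volume.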


Waltham, MASS. — GiantLoop Network, Inc., a leader in Enterprise Optical Networking services, announced Optical Storage Networking (OSN), the first and only portfolio of professional services and managed optical networking services specifically designed for the storage applications of Global 250 companies. GiantLoop’s OSN enables large enterprises to aggregate various storage networking protocols, such as ESCON/FICON, Fibre Channel, Ethernet, and IP, on a single fiber-optic network. OSN is the storage-centric side of GiantLoop’s Enterprise Optical Networking, which aggregates data and storage networking protocols on optical networks that span campus, metropolitan, and inter-city geographies. According to Forrester Research, the Global 250 companies will add an average of 22 terabytes of new storage this year and 150 terabytes of new storage in 2003. This growth is driven by new applications, e-business efforts, and large databases for business intelligence. To protect business-critical information, many firms extend storage channels and build remote data centers to mirror storage at their primary sites, but current solutions are hampered by distance limitations, technology complexities, high costs, and inadequate service options. Optical networking promises to ease these restrictions, but optical solutions require companies to undertake a series of complex tasks, such as procuring dark fiber, purchasing DWDM-based optical switches, integrating storage and optical networks, and managing nascent technologies on an ongoing basis. Most firms don’t have the skills, processes, or time to embark on this large, difficult project.

GiantLoop’s Optical Storage Networking replaces these complex technology selection, implementation, training, and management tasks with a predictable, reliable, 7x24 service. Through its professional services and managed optical networking services, GiantLoop is a one-stop shop for assessing, planning, designing, implementing, and managing optical storage networks that support all current and future storage protocols. “Large companies are caught in a Catch-22,” said Mark B. Ward, president and chief operating officer of GiantLoop Network. “They need optical networks to mirror data and support the explosion of online storage, but don’t have the skills, time, or budgets to implement these technologies in rapid fashion. GiantLoop’s OSN is the right solution at the right time. OSN is a comprehensive storage, optical networking, data center, and operations solution that combines both professional services and managed optical services. Our skills allow us to proceed from assessment through implementation quite quickly. This speed of deployment allows our customers greater business flexibility and greater protection.”

“The coming together of storage and optical technologies will be a further catalyst to the advance of the high-performance Internet,” said Steve Pusey, president, Emerging Sales, Nortel Networks. “GiantLoop is poised to lead the advance in creating powerfully integrated optical and storage solutions. We look forward to working closely with GiantLoop to bring the power of optical storage networking into cities and businesses of all kinds.” “Optical networking adds capability but also complexity to channel extension,” said Mark Knittel, vice president of product architecture strategy and business development at Computer Network Technology Corporation. “Customers relish the benefits but are extremely leery of additional technical hurdles. By working with CNT and GiantLoop, customers can offload the complexities and get all of the advantages.”


Reading, UK — Sequoia Industrial Systems Division is helping telecommunications developers solve critical time-to-market issues by introducing the new OpenArchitect CompactPCI Ethernet switching platform from ZNYX Networks. The platform offers both standard and application-specific switch functions for network system architects, and speeds time-to-market by employing open-source Linux for switch management. The new OpenArchitect CompactPCI switch is targeted at a wide array of carrier-class telco, embedded, ISP, Internet backbone, and enterprise applications. The first product in the series is the ZX4500 OpenArchitect, the industry’s most advanced Ethernet 10/100/1000 switching system implemented on a rugged, hot-swappable 6U CompactPCI subsystem. The embedded switch fabric provides line-rate service for 24 ports of 10/100BaseTX (either front or rear panel) and two ports of front-panel 1000BaseFX Gigabit Fibre, capable of switching more than 6.6 million packets per second. Sixty-four megabytes of packet buffer memory is available to resolve network congestion. The embedded Linux operating system runs on a Motorola MPC8240 PowerPC with 32MB of SDRAM and 32MB of Flash ROM. An on-board Processor-PMC slot is provided for adding either additional I/O peripherals (such as media conversion) or a second CPU for more demanding packet processing functions.

The ZX4500 OpenArchitect’s small footprint has the advantage of high density, which enables network architects to pack more functionality into limited rack space and achieve a cost advantage over conventional telco switching solutions. In addition, the embedded switch fabric is fully scalable and can be quickly configured to bridge to other media types using the Processor-PMC slot. By offering a solution for features such as IEEE 802.1p Class of Service, 802.1q VLANs, and stacking of up to 720 ports of 10/100 and 30 ports of gigabit Ethernet under one management umbrella, OpenArchitect is designed to empower partners with sophisticated features that support individual requirements. The ZX4500 can be run out of the box with any of a number of Flash ROM images downloaded from the World Wide Web. If network architects want additional functionality beyond that found in the standard binaries, a Linux-based software development kit, including source code, is available to enable designers to easily generate their own images. Switch applications already running on Linux or UNIX can be quickly ported to OpenArchitect, and customers can upgrade the switch in the field with new software at any time. Added functionality could include any new Layer 2 or Layer 3 protocols, Layer 2 through 7 filtering, management functions, security, media conversion, and high-availability failover configurations. Visit the company’s Web site for more information.
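The quoted 6.6 million packets per second is close to what the port count implies at line rate with minimum-size frames. A quick check (the frame-overhead accounting below is the standard Ethernet convention, not taken from the announcement):

```python
# Back-of-envelope check of the switching figure quoted above. A minimum-size
# Ethernet frame occupies 84 bytes on the wire: a 64-byte frame plus an
# 8-byte preamble and a 12-byte inter-frame gap. This is the usual basis
# for line-rate packets-per-second claims.
WIRE_BITS = 84 * 8  # 672 bits per minimum-size packet

pps_100m = 100_000_000 / WIRE_BITS     # ~148,810 pps per 10/100 port
pps_1g = 1_000_000_000 / WIRE_BITS     # ~1,488,095 pps per gigabit port

total = 24 * pps_100m + 2 * pps_1g
print(f"{total / 1e6:.2f} million packets per second")  # ~6.55 Mpps
```

That lands within a few percent of the quoted figure; the small difference likely reflects how duplex traffic or framing overhead was counted in the vendor's number.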


Monrovia, CALIF. — ParaSoft, a leading provider of software error prevention and error detection solutions, announced the release of Jtest for Linux, a unit testing tool for Java. Java developers can take advantage of this sophisticated tool to facilitate the development of high-quality software applications on Linux. ParaSoft has been providing multi-platform development tools for C/C++, Java and Web applications for more than seven years. The availability of Jtest for Linux comes at an opportune time, as Linux is rapidly gaining credibility as a development platform. A study conducted by Evans Data Corp. in February 2000 found that, among development and IT managers at large corporations, the expectation of deploying Linux applications increased by 75% during the last six months of 1999, and that the percentage of companies running Linux increased by 95%. These findings demonstrate the high acceptance rate of Linux in corporations. Jtest’s support for Linux was a direct response to demand from the Java development community working on Linux.

“The Linux platform is rapidly becoming a programming platform of choice for Java developers. As more developers move onto this platform, development and quality assurance tools such as Jtest will be needed,” said Thomas Chen, ParaSoft vice president of Development Tools. “Jtest is the first tool of its type available for the Linux platform.” Jtest is a fully integrated, easy-to-use, automatic class testing tool for Java. Jtest integrates every essential type of Java testing into one intuitive tool that automatically performs static analysis, white-box testing, black-box testing and regression testing. Jtest works on any Java class. Developers who want to produce top-notch Java code can use Jtest as soon as they have constructed and compiled each class of their project. Visit the company’s Web site for more information.
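Jtest is a Java product and its generated tests are its own; as a language-neutral illustration of the per-class testing workflow the article describes (exercise each class with black-box checks of specified behavior and white-box checks of internal branches), here is a hand-written sketch using Python’s stdlib unittest:

```python
import unittest

# A hand-written sketch of per-class testing. Jtest generates such tests
# automatically for Java classes; this only mirrors the idea, not Jtest's
# output or API, and all names here are illustrative.

class Account:
    """A small class under test."""
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount
        return self.balance

class AccountTest(unittest.TestCase):
    def test_black_box_deposit(self):
        # Black-box: check the specified behavior for a valid input.
        self.assertEqual(Account(10).deposit(5), 15)

    def test_white_box_invalid_input(self):
        # White-box: exercise the error-handling branch directly.
        with self.assertRaises(ValueError):
            Account().deposit(-1)

unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(AccountTest))
```

Rerunning such a suite after every change is the regression-testing half of the workflow; tools like Jtest automate both the generation and the rerunning.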


Natick, MASS. — The MathWorks, Inc., a leading supplier of technical computing software, today announced the release of its new solution for test and measurement applications. The new Test & Measurement Suite offers a set of MATLAB tools and add-on products for interfacing with industry-standard data acquisition devices and instruments and for analyzing the acquired data. Supported interfaces include GPIB, VXI, and serial port standards as well as communication with popular data acquisition hardware. A key component of the solution is MATLAB 6, the foundation for the Test & Measurement Suite. MATLAB combines hundreds of advanced analysis functions, such as signal analysis, linear algebra, and basic statistics, with practical engineering and scientific graphics. Engineers and scientists can now collect data using MATLAB test and measurement tools and bring the data directly into MATLAB for fast and accurate analysis.

The Test & Measurement Suite is designed to support the entire measurement and analysis process, including interfacing with data acquisition devices and instruments, analyzing and visualizing the acquired data, and producing presentation-quality output. The suite provides new tools developed specifically for test and measurement applications: the Data Acquisition Toolbox 2 and the new Instrument Control Toolbox 1. Both products – released concurrently with MATLAB 6 – provide measurement connectivity from the MATLAB environment. Chris Vrettos, senior technical staff engineer at Marconi Medical Systems, is using MATLAB tools to develop a tester for a Computed Tomography (CT) scanner chip. An important aspect of Mr. Vrettos’ test set-up is the ability to analyze the collected data immediately. “Getting measurements directly into MATLAB is a big bonus. It avoids the multi-step process of saving acquired data to a file, and then reading the file into MATLAB. We’ve already integrated the Instrument Control beta release into various projects,” Vrettos said. The MathWorks’ Test & Measurement Suite enables users to build analysis-rich measurement systems. Both MATLAB users who need to work with measured data and test and measurement professionals who need proven analysis and visualization capabilities will benefit from these new tools. For example, Dr. Kobi Cohen, research engineer at I.M.I. – Rocket Systems Division, is responsible for developing test and analysis systems. Dr. Cohen used the MATLAB Test & Measurement solution for component-level design. In the stimulus-response test set-up he developed, the Instrument Control Toolbox is used to communicate with instruments and vary parameters of the test, while the Data Acquisition Toolbox simultaneously collects eight channels of data. This data is then analyzed and plotted in MATLAB. Dr. Cohen has also built a user interface for his test set-up in MATLAB to automate the process. “My test is working perfectly. I’m able to control eight instruments at the same time using serial and GPIB. I can then instantly analyze the results in MATLAB,” Cohen said. Visit the company’s Web site for more information.
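The pattern both engineers describe (acquire samples, then analyze them immediately, with no intermediate file) can be sketched generically. The toolboxes themselves are MATLAB products, so this Python sketch is only an illustration of the workflow, with a simulated instrument standing in for real hardware:

```python
import math
import statistics

# Illustrative acquire-then-analyze loop. The Data Acquisition and Instrument
# Control Toolboxes are MATLAB products; this sketch only mirrors the pattern
# they enable: read samples from an instrument, analyze them at once, with no
# save-to-file step in between.

def acquire(n_samples, read_sample):
    """Collect n samples through a caller-supplied instrument reader."""
    return [read_sample(i) for i in range(n_samples)]

def analyze(samples):
    """The sort of basic statistics MATLAB applies to acquired data."""
    return {"mean": statistics.fmean(samples),
            "rms": math.sqrt(statistics.fmean(s * s for s in samples))}

# Simulated instrument: a 1 V peak sine wave, 1000 samples over one cycle.
stats = analyze(acquire(1000, lambda i: math.sin(2 * math.pi * i / 1000)))
print(stats)  # mean ~0 V, rms ~0.707 V
```

Swapping the simulated reader for a serial or GPIB read is exactly the substitution the Instrument Control Toolbox performs inside MATLAB.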


Sunnyvale, CALIF. — Marvell, a technology leader in the development of extreme broadband DSP-based mixed-signal integrated circuits for communications signal processing markets, announced its HighPHY family – the industry’s first mixed-signal read channel physical layer (PHY) devices to surpass one gigahertz. “Marvell’s HighPHY devices are the first to achieve data rates of 1.2 billion bits per second, a 60% improvement over our previous record-breaking read channel solutions,” said Dr. Alan J. Armstrong, Marvell’s vice president of Marketing for the Data Storage Group. “With Marvell’s HighPHY devices, enterprise OEMs can now build even higher performance systems for Storage Area Networks (SAN), Network Attached Storage (NAS) and Redundant Array of Independent Disks (RAID) to meet the ever-increasing demand for network storage.” Marvell’s HighPHY family of read channel physical layer devices, with Target-Morphing DSP technology, can adapt across all storage platforms, including mobile and desktop systems. This allows customers to maximize storage capacity and manufacturing yields, resulting in lower overall system cost.

Added Dr. Sehat Sutardja, Marvell’s president and CEO, “In May of this year, Marvell introduced our Alaska family of 0.18 micron CMOS Gigabit Ethernet physical layer devices, with the industry’s most advanced mixed-signal DSP circuitry running at a 125 MHz clock rate. With the introduction of our HighPHY family of read channel devices, we have successfully extended our mixed-signal and DSP technologies to run at 1.3 gigahertz.” Marvell’s HighPHY devices incorporate the latest advancements in DSP technology, including Target-Morphing noise-predictive Viterbi, resulting in the best error rate performance in the industry. While existing read channel devices incorporate fixed and limited Viterbi targets, Marvell’s HighPHY devices support any number of targets, allowing for optimal performance with existing and future recording head and media technology. Marvell’s HighPHY family comprises the 88C5500 and 88C5520 read channel physical layer devices. The 88C5500 is packaged in a 100-pin 14mm x 14mm LQFP-Exposed Pad for enterprise and desktop applications, and the 88C5520 is packaged in a 64-pin 10mm x 10mm TQFP for mobile storage applications. The devices are fully pin-compatible with Marvell’s previous generation 88C5200 and 88C4200 read channel families, allowing for easy system development.


Schaumburg, ILL. — InstallShield Software Corp., the leader in Internet-ready software installation and distribution solutions, announced a major advancement for software developers targeting IBM and other platforms. Immediately available are three new InstallShield Multi-Platform Edition products: InstallShield Express – Multi-Platform Edition, InstallShield Professional – Multi-Platform Edition, and InstallShield Enterprise – Multi-Platform Edition. The result of a yearlong co-development relationship with IBM, the highly-anticipated Multi-Platform Edition products (formerly known as InstallShield Java Edition) further extend InstallShield’s de facto standard installation experience into the multi-platform world. The Multi-Platform Edition products provide developers with the capability to create one powerful and consistent application installation that meets the needs of multiple platforms, including Solaris (SPARC and x86), Linux (Red Hat, Caldera OpenLinux, SuSE Linux, TurboLinux), AIX, OS/2, and Windows. With the release of InstallShield Enterprise – Multi-Platform Edition, InstallShield now also offers support for OS/400.

“Our co-development work with IBM and the release of the InstallShield Multi-Platform Edition products represent a major milestone for software authors targeting multi-platform environments,” said Stan Martin, president and chief operating officer, InstallShield Software Corp. Citing the company’s relationships with IBM, Microsoft Corp. and Sun Microsystems, Martin added, “Leading operating-system vendors continually rely on InstallShield to standardize how software and other digital goods are installed, managed and used across their platforms. By working with IBM, we’re advancing the installation standard across multiple platforms, helping developers significantly reduce development costs while delivering a user-friendly installation experience.” “Organizations’ computing environments are getting increasingly complex, and as a result are more difficult to manage,” said Fred Broussard, senior research analyst at IDC. “Deployment tools need to ensure they provide the flexibility required to design a solution to distribute applications to new users easily, quickly, and with minimal installation problems.” Using InstallShield Multi-Platform Edition products, developers can address the unique requirements of each platform, eliminate redundant work, and take control of the end-user’s first experience with their software. Although installation packages built with the Multi-Platform Edition products can be installed on virtually any platform that supports Java, a number of platform-specific issues would otherwise keep platform-neutral installation packages from providing a simple and reliable user experience.


Mountain View, CALIF. — Mountain View Data, Inc. announced the beta release of InterMezzo 1.0 – an advanced distributed file system software solution optimized for high availability, server mirroring and backup, and content dissemination. With the launch of InterMezzo, Mountain View Data is now gearing up to provide its data management services to enterprise customers, data centers, and ISPs/ASPs throughout North America and in Asian markets. “InterMezzo brings a fundamental technology to synchronize active file systems across multiple servers in environments with intermittent network connectivity. InterMezzo is the ideal file system solution for many data replication and synchronization applications, such as content distribution and server mirroring,” said Dr. Peter Braam, CTO and EVP of Engineering at Mountain View Data. “Optimized to provide simple yet robust and scalable replication, InterMezzo is positioned to see wide acceptance for server mirroring, backup as well as for synchronizing mobile clients.”

Braam started development of InterMezzo in the fall of 1998, and since then has worked with many companies and individuals, as is common with Linux-based applications. Most recently, Tacitus Systems, based in Cherry Hill, New Jersey, contributed extensive testing and debugging to the application. “Tacitus Systems, a storage service provider, realized that a better and more cost-effective solution needed to be developed to handle our customers’ growing data needs,” said Trevor Hughes, President of Tacitus Systems. “When our team came across InterMezzo, it became very clear that InterMezzo offered the right approach and the most promising potential to solve our customers’ needs.”
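The core idea described above, keeping replicas consistent despite intermittent connectivity, amounts to capturing modifications on the active server and replaying them on replicas when the link is available. The real InterMezzo does this at the Linux kernel/VFS level; the toy Python sketch below (all names invented, not InterMezzo's API) only illustrates log-based replay:

```python
# Toy log-based replication: record operations on the primary while the
# link is down, replay them on a replica when connectivity returns.
def record(log, op, path, data=None):
    """Append a modification to the primary's outstanding-change log."""
    log.append((op, path, data))

def replay(log, replica):
    """Apply queued operations, in order, to a replica (dict of path -> data)."""
    for op, path, data in log:
        if op == "write":
            replica[path] = data
        elif op == "delete":
            replica.pop(path, None)
    log.clear()  # replayed entries are no longer needed

primary_log, replica = [], {}
record(primary_log, "write", "/etc/motd", "hello")
record(primary_log, "write", "/tmp/x", "1")
record(primary_log, "delete", "/tmp/x")
replay(primary_log, replica)
print(replica)  # {'/etc/motd': 'hello'}
```

Because the log preserves ordering, the replica converges to the primary's state even when many operations accumulate during an outage.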


Cambridge, MASS. — NetGenesis Corp., a leading developer of E-Metrics solutions for Fortune 1000 and Web 500 companies, announced that its flagship NetGenesis 5 E-Metrics Solutions Suite now supports AIX, IBM’s award-winning UNIX platform. Companies that prefer an IBM UNIX solution can now leverage their highly scalable e-business infrastructure to manage customer relationships more effectively through NetGenesis’ e-metrics and analytics applications. NetGenesis will work closely with IBM to deploy the joint solution at enterprise-class e-businesses. “NetGenesis’ Web analytics solutions enable IBM customers to understand the end results of their Web-based marketing initiatives and the overall health of their online businesses,” said Mike Kerr, vice president, IBM Web Servers. “Together, NetGenesis and IBM can now offer companies who choose an IBM UNIX solution an invaluable component for running the industry’s leading UNIX solution for profitable e-businesses.”

With over 1,000,000 IBM UNIX servers shipped, IBM is a leading provider of UNIX systems worldwide. With DB2 Universal Database, IBM has the fastest growing database business in the industry and consistently delivers the most comprehensive portfolio of data management solutions. With the NetGenesis solution, IBM UNIX customers can obtain an e-metrics and analytics solution to optimize site design, content and customer relationships. NetGenesis’ e-metrics solution is an end-to-end analytic suite that enables e-businesses to analyze e-customer activity in the context of the overall business. E-metrics is the foundation of a strong CRM solution, providing an analytic front-end and infrastructure back-end to link online and offline data and boost the bottom line. Organizations are increasingly employing multiple operating systems and databases to serve customers who use the Web as a critical touchpoint. Consequently, analytics-based eCRM solutions need to support a variety of platforms and applications. NetGenesis has architected its solution to fit into and leverage the complex environments that today’s enterprise e-businesses present. By taking an open-architecture approach and leveraging one common code base, NetGenesis is able to support multiple platforms and databases, offering customers flexibility and choice with the same standard of scalability and functionality.


Foster City, CALIF. — Inktomi Corp., developer of scalable Internet infrastructure software, announced the availability of Inktomi Traffic Server 4.0. The new release extends the Traffic Server platform for the first time to the Linux operating system, providing a compelling price/performance combination for companies that have adopted this increasingly popular operating environment. Inktomi Traffic Server 4.0 software has been optimized for significantly faster performance than previous releases and features more efficient system utilization, resulting in extra processing power for the delivery of value-added services. In addition, the new version offers additional security capabilities and improved content distribution and management functionality. “Today’s announcement broadens Inktomi’s reach with one of the most flexible caching solutions available to meet evolving enterprise requirements for robust Internet infrastructure within their networks,” said Ed Haslam, chief strategist, Network Products Division at Inktomi. “Supporting the widest range of platforms and data formats, Traffic Server software is optimized for both enterprises and service providers seeking to reduce bandwidth requirements, accelerate network performance and deploy a variety of edge services.”

The industry’s most scalable network cache platform for both distributed enterprise and service provider networks, Inktomi Traffic Server supports the widest range of data protocols available for increased bandwidth savings and greater control of data. As an extensible software solution, Traffic Server is the only network cache that also functions as a platform for delivering value-added services and applications at the edge of the network, including authentication, content transformation, filtering, streaming media, and virus-checking. In addition to the software-based solution, Inktomi offers full-featured Traffic Server technology in appliance form, through devices powered by the Inktomi Traffic Server Engine, from leading original equipment manufacturer (OEM) vendors including 3Com and Intel. Today, Inktomi Traffic Server delivers core enabling technology for leading service providers and enterprises, such as America Online, [email protected], Genuity and Merrill Lynch. For more information, visit the Inktomi website.
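The bandwidth savings claimed above come from the basic mechanics of a network cache: repeated requests are served from local storage, and the origin server is contacted only on a miss. A minimal, generic sketch of that pattern follows (this is not Inktomi's implementation or API; all names are invented):

```python
class ToyCache:
    """Generic illustration of forward-proxy caching; not Inktomi code."""
    def __init__(self, fetch_from_origin):
        self.store = {}                  # url -> cached body
        self.fetch = fetch_from_origin   # callable used only on a miss
        self.hits = self.misses = 0

    def get(self, url):
        if url in self.store:
            self.hits += 1               # served locally: no origin traffic
            return self.store[url]
        self.misses += 1                 # miss: one trip to the origin
        body = self.fetch(url)
        self.store[url] = body
        return body

origin_calls = []
cache = ToyCache(lambda url: origin_calls.append(url) or "<page %s>" % url)
cache.get("/index.html")
cache.get("/index.html")
print(cache.hits, cache.misses, len(origin_calls))  # 1 1 1
```

Two client requests produce only one origin fetch; at ISP scale, that ratio is the bandwidth reduction. A production cache additionally honors expiry and validation rules, which this sketch omits.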


Fremont, CALIF. — VA Linux Systems, Inc. announced the availability of “SourceForge OnSite,” a ground-breaking subscription service built on the Web-based collaborative development system (CDS) powering SourceForge.net, the world’s largest Open Source development center. Installed behind customers’ corporate firewalls, SourceForge OnSite provides a turn-key collaboration system – fully customized, implemented and supported by VA Linux Professional Services – for enterprise-class customers that want to leverage Open Source tools and methods for internal software development. Agilent Technologies’ central research lab is one of the first VA Linux customers to deploy SourceForge OnSite to support its geographically distributed development teams. The SourceForge collaborative development system is a proven, powerful platform, currently supporting more than 12,000 software projects worldwide – including XFree86, KDE, Python and MySQL – and over 92,000 registered users via SourceForge.net.

SourceForge OnSite provides an integrated toolset for centralized code, project and knowledge management in a secure environment. By deploying SourceForge OnSite, enterprise IT developers, independent software vendors and consulting firms can take advantage of significant efficiencies enabled by SourceForge within their own companies: code re-use and archives, enhanced communication within and across geographically distributed development teams, and standardization on a single toolset. As a Web-based technology, SourceForge is cross-platform, customizable and compatible with many existing development and support technologies. SourceForge OnSite features include bug tracking, patch management, task management, source control, code sharing, communication, support/issue tracking, document management, team productivity analysis and statistical reporting functionality. “SourceForge OnSite provides an enterprise-class collaborative development system that enables our customers to leverage SourceForge internally, and focus on their core competencies rather than worrying about the maintenance and support of their development infrastructures,” said John “Tiberius” Hall, vice president of strategic planning, VA Linux Systems. “With the rapid growth and proven success of SourceForge.net as the world’s largest ASP for Open Source developers, many companies have expressed interest in deploying a customized, supported version of SourceForge as a next-generation infrastructure enabling more efficient software development within their organizations.”


Salt Lake City, UTAH — High-detail, photospecific visualization can now be delivered via the Internet, thanks to Evans & Sutherland Computer Corp.’s (E&S) new RSWeb Server. RapidScene visualization products are used by the military and law enforcement agencies for analysis, mission planning, rehearsal, training, and security planning. Demonstrated for the first time at last week’s I/ITSEC 2000, RSWeb Server software provides the same high-quality visualization produced by RapidScene to a standard Web browser. Because RSWeb Server uses the server for rendering, multi-Gbyte databases can be viewed in detail on any computer, even a laptop. The user interface provides quick click and render positioning, adjustable settings, and the ability to view any perspective and print a scene.

The RSWeb Server software includes the Web server, Java application, and browser interface, all complete and easy to install. The software can be purchased as an add-on to the RapidScene software package or for a stand-alone server. “RSWeb Server demonstrates our commitment to providing the most advanced and updated visualization technologies available today,” said Dave Figgins, E&S Simulation Group vice president. “By taking advantage of advancements in Web technology, we can make products like RapidScene more accessible, and therefore more useful, to our customers.” RapidScene visualization products render high-resolution, photospecific databases using information from aerial or satellite photographs. These databases, which feature superb image quality and outstanding accuracy, can be used for a variety of applications, including military planning and mission rehearsal, security planning and scenario analysis, law enforcement training, and urban planning. Visit the E&S web page for more information.
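The reason multi-gigabyte databases can be viewed on a laptop, as noted above, is the server-side rendering pattern: the client sends only a viewpoint, and the server, which holds the database, renders and returns an image. The sketch below illustrates that division of labor only; it is not E&S code, and every name in it is invented:

```python
# Toy illustration of server-side rendering: the heavy database never
# leaves the server; the client exchanges a viewpoint for pixels.
def render_on_server(database, viewpoint):
    """Stand-in for a heavyweight renderer over a multi-gigabyte database."""
    return "image(%s @ %s)" % (database["name"], viewpoint)

def client_request(server_render, viewpoint):
    # The client holds no geometry or imagery -- only the returned frame.
    return server_render(viewpoint)

db = {"name": "city-model", "size_gb": 40}
img = client_request(lambda vp: render_on_server(db, vp), "lat=40.76,lon=-111.89")
print(img)  # image(city-model @ lat=40.76,lon=-111.89)
```

The trade-off is latency per view change in exchange for zero client-side hardware requirements, which is why a standard browser suffices as the front end.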

