December 15, 2010
INCLINE VILLAGE, Nev., Dec. 15 -- Frontline Systems is shipping Risk Solver Platform V10.5 for Microsoft Excel and Solver Platform SDK V10.5, the latest versions of its products featuring optimization for resource allocation, Monte Carlo simulation for risk analysis, and robust optimal decision-making under uncertainty. Free trial versions can be downloaded from http://www.solver.com.
Risk Solver Platform and its subsets are upward compatible from the basic Solver included in Excel, which Frontline developed for Microsoft and has since improved in Excel 2010 for Windows and Excel 2011 for the Macintosh. Users can solve problems hundreds to thousands of times larger than the basic Excel Solver handles, at speeds from several times to hundreds of times faster, and they can solve new types of problems with conic optimization, stochastic programming, and simulation optimization. Solver Platform SDK brings optimization and simulation power to custom applications written in C++, C#, Java and other languages, and can be used to solve Excel workbook optimization and simulation models on a server.
V10.5 includes new tools for managing solutions to multiple optimization and simulation problems, improved support for shared probability distributions, the ability to solve optimization and simulation models on supercomputing clusters and "in the cloud" with Windows HPC Server 2008 R2, and new plug-in Solver Engines with greater performance on linear, mixed integer, and nonlinear problems.
"Excel users now have access to the most powerful Solver technology available on any platform," said Daniel Fylstra, Frontline Systems' president. "They can even solve problems on their companies' supercomputing clusters, or their own temporary clusters in the cloud, without leaving desktop Excel."
Managing Solutions for Multiple Optimization and Simulation Problems
Users who build optimization or simulation models frequently want to solve many instances of those models with different data or parameters. Frontline's Solvers have long made it easy to solve multiple problem instances, but, as with other software, the final solutions existed only in memory and had to be used immediately for analysis or reporting. V10.5 introduces an XML-based solution file that makes it easy to solve multiple problem instances on any machine; save, transfer, and load the resulting solutions; and use them for analysis and reporting on any machine where Frontline's software is running.
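The release does not publish the solution-file schema, so the following is only an illustrative sketch of the general idea (persisting a solver result as XML so it can be reloaded on another machine). The element names and the `save_solution`/`load_solution` helpers are hypothetical, not Frontline's actual format:

```python
import xml.etree.ElementTree as ET

def save_solution(path, objective, variables):
    """Serialize one problem instance's solution to an XML file.

    `objective` is the optimal objective value; `variables` maps
    decision-variable names to their optimal values. Element names
    here are invented for illustration only.
    """
    root = ET.Element("Solution")
    ET.SubElement(root, "Objective").text = repr(objective)
    vars_el = ET.SubElement(root, "Variables")
    for name, value in variables.items():
        ET.SubElement(vars_el, "Var", name=name).text = repr(value)
    ET.ElementTree(root).write(path, xml_declaration=True)

def load_solution(path):
    """Reload a saved solution for analysis or reporting elsewhere."""
    root = ET.parse(path).getroot()
    objective = float(root.findtext("Objective"))
    variables = {v.get("name"): float(v.text)
                 for v in root.find("Variables")}
    return objective, variables

# Solve many instances, persist each solution, then analyze later
# (or on a different machine) instead of only in memory.
save_solution("instance1.xml", 1234.5, {"x1": 10.0, "x2": 7.5})
obj, vars_ = load_solution("instance1.xml")
```

The point of a file-based format is exactly the decoupling described above: the machine that solves a problem instance and the machine that reports on its solution no longer need to be the same.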
Solving Problems on Supercomputing Clusters or in the Cloud
Risk Solver Platform V10.5 and Solver Platform SDK V10.5 include built-in support for high performance computing (HPC) clusters running Microsoft's new Windows HPC Server 2008 R2. Risk Solver Platform includes easy-to-use menus and dialogs for connecting to a compute cluster, using it to solve problems, returning solutions to the desktop, and working with the solutions in Excel -- just as if all the computing had been done on the desktop. Both Risk Solver Platform and Solver Platform SDK can run on a cluster's compute nodes, and solve optimization and simulation problems submitted by users.
With Windows HPC Server 2008 R2 Service Pack 1, users who don't have a large compute cluster in their company can create one "on the fly" by spinning up compute nodes as virtual machines on Windows Azure, and running Solver Platform SDK on those nodes to solve optimization and simulation problems. Once the problems are solved, the virtual machines are "spun down," the user pays only for the time he or she uses them, and many IT maintenance issues simply disappear.
Improved Support for Shared Probability Distributions
In V10.5, Frontline continues its leading-edge support for shared probability distributions, used in simulation and stochastic optimization applications, stored in the DIST 1.1 format described at http://www.probabilitymanagement.org -- an emerging standard now supported by other software vendors such as Oracle and SAS. In Risk Solver Platform V10.5, it is easier than ever to produce and consume arrays of DIST-based probability distributions in a model, and use them in ultra-fast Monte Carlo simulations in Excel -- 10 to 100 times faster than most other software.
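The core idea behind shared probability distributions is that every model draws from the same stored sample array rather than generating its own random values, so results are coherent and comparable across models and machines. The sketch below illustrates only that idea; it does not implement the actual DIST 1.1 XML schema (see probabilitymanagement.org for the real specification), and the sample sizes and model functions are invented:

```python
import random
import statistics

# A "shared" distribution: a fixed array of pre-generated samples that
# every model reuses, standing in for a DIST-style stored distribution.
random.seed(42)
DEMAND_SAMPLES = [random.gauss(1000, 150) for _ in range(10_000)]

def simulate_profit(unit_margin, fixed_cost):
    """Monte Carlo trial of profit over the shared demand samples."""
    return [unit_margin * d - fixed_cost for d in DEMAND_SAMPLES]

# Two models driven by the same shared samples: trial-by-trial results
# are directly comparable, because trial i uses the same demand draw.
profit_a = simulate_profit(unit_margin=5.0, fixed_cost=2000.0)
profit_b = simulate_profit(unit_margin=6.0, fixed_cost=3500.0)
mean_profit_a = statistics.mean(profit_a)
```

Because the samples are fixed data rather than live random draws, a simulation over them is just array arithmetic, which is what makes the vectorized, spreadsheet-embedded Monte Carlo runs described above fast.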
Plug-in Solver Engines Offer Best-Ever Performance
V10.5 also includes major new releases of several large-scale Solver Engines that "plug into" Risk Solver Platform, Premium Solver Platform and Solver Platform SDK. The Gurobi Solver Engine V10.5 now solves quadratic (QP) and quadratic mixed-integer (QP/MIP) problems with remarkable speed. The XPRESS Solver Engine V10.5 offers a typical 50 percent performance gain over its previous version on linear, quadratic and mixed-integer models. The KNITRO Solver Engine V10.5 offers industry-leading performance on large-scale nonlinear models, and takes advantage of the Intel Math Kernel Library (MKL) to exploit multiple processor cores and advanced features of Intel's latest processors.
About Frontline Systems
Frontline Systems, Inc. is a leading developer of optimization and simulation software, and the leader in spreadsheet optimization software that helps analysts and managers optimally allocate scarce resources -- money, equipment, and people -- to realize substantial cost savings. Frontline developed the solvers/optimizers in Microsoft Excel, Lotus 1-2-3 and Quattro Pro, distributed to more than 500 million spreadsheet users. Founded in 1987, Frontline is based in Incline Village, Nev.
Source: Frontline Systems, Inc.