October 11, 2012
Oct. 11 — Researchers from the High Performance Computing for Efficient Applications and Simulation (HPC4EAS) group at the Department of Computer Architecture and Operating Systems of the Universitat Autònoma de Barcelona (UAB), in collaboration with the Emergency Services Unit at Hospital de Sabadell (Parc Taulí Healthcare Corporation), have developed an advanced computer simulator, a decision support system (DSS), that could aid emergency service units in managing their operations.
The model was designed from real data provided by the Parc Taulí Healthcare Corporation, using modelling and simulation techniques that represent each individual and therefore require high performance computing. The system analyses how the emergency unit reacts to different scenarios and optimises the resources available.
The simulator was created by lecturer Emilio Luque, the project's principal researcher; UAB PhD students Manel Taboada, a lecturer at the Gimbernat School of Computer Science (a UAB-affiliated centre), and Eduardo Cabrera, a trainee researcher; and María Luisa Iglesias and Francisco Epelde, heads of the Emergency Services Unit of Parc Taulí.
"Planning the use of resources available to an emergency unit staff is a complex task, since the arrival of patients varies greatly, not only during the day, but depending on the week, month, etc. That is why those in charge find it useful to have computer tools which simulate the effects of special situations, such as seasonal increases, epidemics, and so forth, in order to be able to identify the best combination of resources for each moment", Emilio Luque explains.
The most notable feature of the simulator is its precise representation of the behaviour of the individuals identified in the system and of their interactions. "Several attempts have been made to simulate emergency services, but with other methodologies that did not capture enough of a system that depends on human behaviour, that is, on the interactions of individuals who act more or less independently in the decisions they make. In addition to in-depth knowledge of the methodology, direct access to the information and data provided by the emergency services is needed in order to verify and validate the work carried out. This data is very relevant and was not included in other simulators," Manel Taboada states.
Researchers defined different types of patients according to their emergency level, and doctors, nursing teams, and admissions staff according to their levels of experience. This made it possible to study the duration of processes such as triage (when the emergency level is determined), the number and type of patients arriving at each moment, the waiting time at each stage of the service, the costs associated with each process, the number of staff needed for a given type of assistance and, in general, all other quantifiable variables. The system not only helps to make decisions in real time; it can also help by making forecasts and improving the functioning of the service.
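To make the idea concrete, the sketch below shows, in Python, the kind of individual-based queueing model the paragraph describes: patients with SET-style urgency levels and nurses whose triage speed depends on experience. All roles, levels, and timings are illustrative assumptions, not the UAB team's actual parameters or data.

```python
import random

# A minimal, hypothetical sketch of this kind of individual-based model;
# the roles, levels, and timings below are illustrative assumptions,
# not the UAB team's actual parameters or data.

TRIAGE_LEVELS = [1, 2, 3, 4, 5]           # SET scale: 1 = most urgent

# Assumed mean triage duration (minutes) by nurse experience level
TRIAGE_TIME = {"junior": 12.0, "senior": 7.0}

class Patient:
    def __init__(self, arrival_min):
        self.arrival = arrival_min
        # Less urgent levels are more common in this toy distribution
        self.level = random.choices(TRIAGE_LEVELS, weights=[1, 2, 4, 8, 10])[0]

class Nurse:
    def __init__(self, experience):
        self.experience = experience

    def triage_minutes(self, patient):
        # Duration depends on experience, slightly longer for urgent cases
        mean = TRIAGE_TIME[self.experience] + (5 - patient.level)
        return random.expovariate(1.0 / mean)

def simulate_triage(n_patients=1000, nurses=("junior", "senior")):
    staff = [Nurse(e) for e in nurses]
    free_at = [0.0] * len(staff)          # minute each nurse becomes free
    waits, t = [], 0.0
    for _ in range(n_patients):
        t += random.expovariate(1 / 5.0)  # ~1 arrival every 5 minutes
        p = Patient(t)
        i = min(range(len(staff)), key=lambda k: free_at[k])
        start = max(t, free_at[i])
        waits.append(start - t)
        free_at[i] = start + staff[i].triage_minutes(p)
    return sum(waits) / len(waits)

print(f"mean wait for triage: {simulate_triage():.1f} minutes")
```

Rerunning such a model with a different staff mix (more senior nurses, say, during a seasonal peak) is exactly the kind of what-if comparison the quantifiable variables above support.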
The model is highly complex: it takes into account the elements relevant to the functioning of emergency services, such as computer systems, clinical diagnosis support services (laboratories, X-rays, etc.) and consultations with specialists. This makes it possible to test the service's resilience should any of these elements fail.
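As a hedged illustration of what such a resilience test might look like, the toy model below compares the average visit length with an X-ray service available and with it out of action. The service, probabilities, and timings are assumptions made for this example, not real Parc Taulí data.

```python
import random

# A hedged sketch of a "what happens if a support service fails" test;
# service names, probabilities, and timings are assumptions for
# illustration, not real Parc Taulí data.

def visit_minutes(xray_available=True):
    total = random.expovariate(1 / 20.0)        # consultation time
    if random.random() < 0.4:                   # assume 40% need an X-ray
        # If the in-house service is down, patients are referred out,
        # which takes far longer in this toy scenario
        mean = 30.0 if xray_available else 120.0
        total += random.expovariate(1 / mean)
    return total

def mean_visit(xray_available, n=10_000):
    return sum(visit_minutes(xray_available) for _ in range(n)) / n

print(f"X-ray service up:   {mean_visit(True):6.1f} min per patient")
print(f"X-ray service down: {mean_visit(False):6.1f} min per patient")
```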
Another advantage of the new system over previous models is its adaptability to all types of emergency services. "Since it is based on a service as complex as the one we have here at Parc Taulí, it is quite easy to adapt it to other hospitals through a 'tuning' process in which the data is redefined," Emilio Luque explains.
For now, the simulator has been used with level 4 and 5 patients, the non-urgent patients as defined by the Spanish Triage System (SET), who represent almost 60% of all patients attended; it covers the admission zones, triage and diagnosis-treatment processes. The version currently under development takes into account more severely affected patients (SET levels 1, 2 and 3). In the near future, the researchers aim to apply the system to other medical specialties, such as surgical areas and paediatrics.
The implementation was carried out in the NetLogo simulation environment, a tool of proven reliability that is commonly used for individual-based modelling and simulation in the social sciences.
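For readers unfamiliar with the approach, the following Python sketch mimics NetLogo's tick-based setup/go structure with a minimal doctor-and-queue model. It illustrates individual-based simulation in general, not the UAB simulator itself, and all figures in it are assumed.

```python
import random

# A minimal tick-based loop in the spirit of NetLogo's setup/go
# structure, written in Python for illustration; it is not the UAB
# simulator, and all figures below are assumed.

class Doctor:
    def __init__(self):
        self.busy_until = 0   # tick at which the doctor becomes free

def setup(n_doctors=3):
    return [Doctor() for _ in range(n_doctors)], []   # doctors, queue

def go(doctors, queue, tick):
    if random.random() < 0.3:                 # a new patient this tick?
        queue.append(tick)                    # store the arrival tick
    for d in doctors:                         # free doctors take patients
        if d.busy_until <= tick and queue:
            queue.pop(0)
            d.busy_until = tick + random.randint(10, 30)  # consult length

doctors, queue = setup()
for tick in range(480):                       # an 8-hour shift, 1-min ticks
    go(doctors, queue, tick)
print(f"patients still waiting at end of shift: {len(queue)}")
```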