December 07, 2007
Dec. 4 -- The Howard Hughes Medical Institute has appointed Dr. Vijay K. Samalam as director of scientific computing and information technology at the Janelia Farm Research Campus.
Samalam will be responsible for all local information technology, including the design, implementation, and support of the scientific computing infrastructure. Samalam will report to Cheryl A. Moore, chief operating officer at Janelia Farm.
"We are living in a golden age in biomedical research where managing and making sense of the sheer explosion of data being produced is going to require all the IT and computing tools one can muster."
"Vijay's significant expertise in running a complex IT environment in support of science and his excitement for hands-on, small group research make him an ideal fit for Janelia," said Moore. "His presence has already impacted the way we think about supporting science here; we've very pleased that he's joined us."
"I am honored to be here at Janelia where scientists are engaged in cutting edge research in neuroscience and the tools to support it," said Samalam. "We are living in a golden age in biomedical research where managing and making sense of the sheer explosion of data being produced is going to require all the IT and computing tools one can muster. I am excited to be a part of this process."
At Janelia Farm, which is located in Ashburn, Va., HHMI has created a setting where small research groups can explore fundamental biomedical questions in a highly collaborative, interdisciplinary culture. Approximately 230 resident and 100 visiting scientists will work toward two main goals: identifying the general principles that govern how information is processed by neuronal circuits, and developing imaging technologies and computational methods for image analysis.
Samalam comes to Janelia Farm from the San Diego Supercomputer Center (SDSC), where he served as executive director and reported to the center's director, Dr. Francine Berman. At the SDSC, Samalam was responsible for all day-to-day operations of the center, which provides high performance computing services and support for scientists nationwide. Samalam was also director of the technology and research development division at the SDSC.
Prior to joining the SDSC, Samalam was vice president for architecture and chief technology officer for Lucent Technologies' core switching division, where he oversaw the development of the company's next-generation optical network. He also worked for 16 years at GTE Laboratories, where he was a project leader and staff scientist. At GTE, his research involved broadband switching, asynchronous transfer mode, and Internet protocol services.
In 1997, Samalam was awarded the IEEE Fred W. Ellersick Prize for the most outstanding paper published in IEEE Communications Magazine.
Before entering the data communications industry, Samalam conducted research in solid state physics and taught in the physics department at the University of Florida. He has a Ph.D. in physics from the State University of New York at Stony Brook.
Samalam succeeds Marshall R. Peterson, who served as director of information technology, playing a key role in building the hardware and software infrastructure at Janelia Farm. Peterson left Janelia Farm to become chief technology officer at the National Ecological Observatory Network (NEON).
Source: Howard Hughes Medical Institute