eXludus CEO Speaks His Mind … and Then Some

By Derrick Harris, Editor

October 3, 2005

You could say many things about eXludus CEO Benoit Marchand, but that he's afraid to speak his mind is not one of them. In this captivating interview with GRIDtoday editor Derrick Harris, Marchand not only covers the usual suspects — who he is, why his company is innovative, etc. — he also shares his views on global economics, how off-shoring affects the North American economy, and the aesthetics of software development.

GRIDtoday: For starters, what is your background in Grid computing and high-performance computing (HPC), in general?

BENOIT MARCHAND: My background in Grid computing dates back to my time as a grad student at Universite de Sherbrooke and later the University of Waterloo, where I was conducting research in distributed computing — more precisely, working on distributed synchronization problems. In 1985, I wrote a thesis on the use of file servers as distributed processing gateways. That was before Apollo NCS — later known as DCE.

Then, I joined Silicon Graphics in early 1988 and was among the first few to believe we could use their new 2-way R2000 processors to do high performance computing, but I realized that while RISC processing offered undeniable price advantages, code tuning remained as complex and time consuming as standard vector optimization. Moreover, SMP code parallelization was then little known and hard work. Thus, optimization complexity was a barrier to success. While managing SGI's technical pre-sales activities in Europe — including benchmarking activities — I set out to develop new code optimization methodologies, develop training materials and seminars, and eventually published a Web book — the first, I believe — on that subject. Eventually, we were able to substantially reduce the time required to optimize an application, from several weeks to a day or so. This was key to SGI's success in HPC.

After nearly 10 years at SGI working the academic and research HPC markets, I saw an opportunity to leverage emerging Grid throughput processing technologies in the industrial sectors. I moved to Sun Microsystems and took on the role of European HPC Business Manager. We started to design, build, promote and sell our own cluster computing products. Within two years, Sun Microsystems had moved from a No. 5 position to a market leader position in European HPC sales.

So, I've been directly involved in — and hope that I have partially influenced — the evolution of distributed processing, Grid computing and HPC for the past 20 years.

Gt: What is the story on eXludus? What inspired the formation of the company, and what unique space does it fill in the Grid software market?

MARCHAND: Inspiration came from being close to my customers — from listening and seeking out opportunities to improve their capacity to design better products, to shorten the time to design new drugs and to better predict natural catastrophes. I'm passionate about science and technology, not gadgets.

What people were telling me was simple: “We've invested a quarter billion into bio-engineers' desktops, why can't we use those more rather than buying more high-end computers?” “To do accurate aircraft structural analysis I need to scale my processing capacity to tens of thousands of processors, how can we do that?” “We've purchased the fastest network and the fastest file server, and still we can't feed enough data to our processing nodes!”

Basically, they were telling me that Grid computing, as we do it today, does not scale. For every problem, there comes a point when you can't scale the problem size or the Grid size any further; marginal performance gains diminish as things scale, and bottlenecks develop.

While my sales background was telling me, “Hmm, we can't sell upgrades here anymore,” my research background was crying, “How can we overcome the limitations?”

I left Sun in 2002 and started to work on the problem, writing patents and developing software.

Rather than trying to find the ultimate answer to the universal question — which we all now know is “42” — I concentrated on alleviating HPTC data management queuing issues. Compared to general purpose HPC applications, all high performance throughput processing applications bear similar data transport characteristics, for which it is possible to remove queuing bottlenecks by taking advantage of locality-of-reference principles.

So, what unique space do we fill in the Grid computing market? Without question, we fill the urgent need for clever data management tools to remove the data transport bottlenecks plaguing all Grid computing HPTC applications.

Our company name, eXludus, is Latin for “Stop playing.”

Gt: In July, the company received $1.5 million in seed financing that was earmarked to help with technology development. How is the development coming along, and when can we expect to see the first product announcement?

MARCHAND: Well, the $1.5 million was not earmarked solely for technology development. As I said earlier, I started working on development back in 2002, and until early 2005 I was self-financing all activities. The product was well underway when we closed this round of financing. Consequently, nearly three quarters of our operating costs are devoted to market development and sales activities. This is rare among technology start-ups.

By the end of September, we will release our first two products, with two more coming out within two quarters.

Gt: Speaking of products, tell me about RepliCator, eXludus's data management software.

MARCHAND: RepliCator does what its name says: replicate data. But it also does a lot more.

Data replication simply means that all Grid processing nodes receive the same data simultaneously. This removes the need to send the same data over and over again to each requesting processing node. Indeed, throughput processing applications typically share an input data set. This is one source of data transport queuing, a bottleneck that we eliminate by sharing the network connection while data is sent.
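As a rough illustration of the “one send, many receivers” idea behind sharing the network connection, the sketch below uses plain UDP multicast. It is not RepliCator's protocol — the product layers reliability and recovery on top — and the group address and port are arbitrary example values.

```python
# Minimal sketch of "one send, many receivers" via UDP multicast: a single
# transmission on the group address reaches every subscribed node, instead of
# N separate unicast copies. Group address and port are arbitrary examples.
import socket
import struct

GROUP, PORT = "239.1.1.1", 5007

def send_block(payload: bytes) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(payload, (GROUP, PORT))  # one packet, delivered to all group members
    sock.close()

def receive_blocks():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    membership = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    while True:
        data, _addr = sock.recvfrom(65535)
        yield data
```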

RepliCator also caches data at each processing node. Caching removes another source of data transport queuing, where the same data set — or a portion of it — is reused several times by a series of jobs in throughput applications. Moreover, caching linearly increases data access bandwidth. It takes an expensive file server/disk farm/network infrastructure to sustain data transfer rates of 1 GBps to feed 10 MBps to each of 100 nodes, but the same 100 nodes have an aggregate 100 Gbps of data bandwidth (assuming a Gigabit network), and it's free!
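To make that arithmetic concrete, here is a rough back-of-the-envelope comparison using the figures above; the per-node Gigabit link is the stated assumption, and the numbers are purely illustrative.

```python
# Back-of-the-envelope comparison for the 100-node example above.
nodes = 100
per_node_demand_MBps = 10      # each node must be fed ~10 MB/s from the central file server
per_node_link_Gbps = 1.0       # assumed Gigabit Ethernet link per node

central_server_load_GBps = nodes * per_node_demand_MBps / 1000.0  # load on the shared file server
aggregate_node_bandwidth_Gbps = nodes * per_node_link_Gbps        # combined node-side bandwidth once data is cached locally

print(f"Central file server must sustain ~{central_server_load_GBps:.0f} GB/s")
print(f"Aggregate node-side bandwidth: ~{aggregate_node_bandwidth_Gbps:.0f} Gbps")
```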

RepliCator also does data staging. So, while a processing node is busy computing on Job “B,” we fill the data cache with the data set needed for Job “C” and, in the background, retrieve the results for Job “A,” which completed earlier. This way, processing nodes are kept busy at all times. The trick is to perform data staging efficiently: our footprint is less than 2 MB of memory and an average of 1 percent of processor utilization.
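A minimal sketch of that staging pattern follows, assuming hypothetical fetch_data, compute and upload_results callables; it only illustrates overlapping computation with background transfers, not eXludus's actual implementation.

```python
# Illustrative staging loop: while the node computes on job i, a background
# thread prefetches job i+1's input and ships job i-1's results.
# fetch_data(), compute() and upload_results() are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor

def run_pipeline(jobs, fetch_data, compute, upload_results):
    if not jobs:
        return
    with ThreadPoolExecutor(max_workers=2) as pool:
        next_input = pool.submit(fetch_data, jobs[0])       # stage the first input set
        pending_upload = None
        for i, job in enumerate(jobs):
            data = next_input.result()                      # blocks only if staging lags behind compute
            if i + 1 < len(jobs):
                next_input = pool.submit(fetch_data, jobs[i + 1])   # prefetch the next job's data
            result = compute(job, data)                     # foreground computation
            if pending_upload is not None:
                pending_upload.result()                     # ensure the previous upload finished
            pending_upload = pool.submit(upload_results, job, result)  # ship results in the background
        if pending_upload is not None:
            pending_upload.result()
```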

The RepliCator product suite also performs data activated processing. This innovative feature permits the automatic synchronization of data transport and job dispatch activities. Until now, everyone has been dispatching jobs and then waiting for data (data transport queuing). This was fine for supercomputers and large SMPs with direct attached storage, but large clusters and Grids have exposed the limitations of the process-centric scheduling approach. With RepliCator, the transport of data triggers the dispatch of jobs, so processing nodes don't wait.
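As a toy illustration of data-activated dispatch — launching a job only once its full input set has landed on a node, rather than dispatching first and blocking on data transport — consider the sketch below. The class and callback names are invented for illustration; the product's actual mechanism is not described at this level of detail here.

```python
# Toy data-activated dispatcher: a job is launched only when every input
# file it needs has arrived on the node. register_job() and on_data_arrived()
# are invented names; a transport layer would call on_data_arrived() as
# each replicated file lands.
class DataActivatedDispatcher:
    def __init__(self, dispatch_job):
        self.dispatch_job = dispatch_job   # callable that actually starts the job
        self.pending = {}                  # job_id -> set of input files still missing

    def register_job(self, job_id, required_inputs):
        self.pending[job_id] = set(required_inputs)

    def on_data_arrived(self, job_id, filename):
        missing = self.pending.get(job_id)
        if missing is None:
            return
        missing.discard(filename)
        if not missing:                    # all inputs present: trigger the dispatch
            del self.pending[job_id]
            self.dispatch_job(job_id)
```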

RepliCator is also fault tolerant “plus”: not only does it handle network and node failures automatically, it also performs recovery automatically. So, upon return to normal operation, processing nodes complete the transfers they missed.

These features are unique among data management tools currently available and that's just the beginning. I can promise you amazing surprises not too far away.

So, what's the bottom line to all these features? Our aim is to demonstrate a positive ROI to every one of our customers. We recover much of the processing capacity wasted by data transport queuing and reduce the cost of support infrastructure while simplifying system management.

As I said before, I believe in technology as a means to improve the way we work and live. Don't come to us for gadgets.

Gt: I understand you have a vision of “aesthetic and efficient design” that has found its way not only into your software design, but also into the office environment. Can you speak a little about this vision and why it's important that it is so pervasive within the company?

MARCHAND: Efficiency in software development is core to our company. Beyond the patents and other trade secrets, we derive much of our competitive edge and protection against copycats by having developed an innovative and efficient way to develop networking software.

By contrast, most software development happening nowadays is limited to “reusing” modules and tools developed by others in the past. Developers derive new functionality by building on top of something else. Consequently, most software stacks have become expensive and unmanageable Towers of Babel.

Only by thinking differently and developing technology “out of the box” can one expect to create truly innovative products.

Moreover, software design is an act of creativity. How can anyone expect software “artists” to be creative when their working environment looks like a dentist's office or, worse — as I've seen — is in the basement of an old office building in the suburbs? In business schools, strategists talk about business coherence. Well, in our case, coherence starts with aesthetics. We put our developers in a cozy loft in old Montreal — near the port, restaurants and commuting facilities. We provide them with the quietest desktops we could find, remove all the walls and replace them with glass so light floods everywhere and, finally, give each of them a handmade solid maple desk designed by a local artist. The environment becomes a constant reminder of our design efficiency goal and a challenge to excel at being creative.

It works. We've completed two products in six months with two more coming up within the next six. And our technology surpasses everything else in complexity and features.

The amazing part is that by using local artists and carefully selecting the office, we've kept cost down to the same price as if we had gone to the suburbs, rented a basement and furnished it with chain store furniture.

Gt: Can you discuss the company's concept of “economic reality?” How does this concept relate to off-shoring, a practice that seems to be picking up steam with each passing day?

MARCHAND: I come from a small town in Quebec, lost in the mountains near the Vermont border. Out there, one goes to Fred — who owns the local garage — to get his car fixed, and to Julie, the local pharmacy owner, to buy nose drops. We know that a local community may only survive through local patrons and that we're all interdependent.

It used to be that North America derived most of its economic power from its production capacity. In the name of globalization, a lot of that has been shipped overseas. “Fine,” we said, “we can maintain our economic status through our service economy.” Well, with off-shoring we're soon about to lose our edge in this sector, too. So, what will be left for North America? Lumber, grain, water, fisheries, and some oil and gas in Alaska and Canada?

After we've shipped our economic-power-generating industries elsewhere, how will the economy grow worldwide if there's no one left in North America able to buy goods? For instance, the U.S. may be a $12 trillion economy, but already $700 billion per year is wasted in the trade deficit, and it's expected to grow to $1.5 trillion in the next few years. When a nation's trade deficit growth rate exceeds its GNP growth rate, there's a problem. But when it becomes an acceptable long-term economic modus operandi, that's looking for trouble!

I strongly believe in three things. First, an unrestrained global economy can only lead to global catastrophe. Second, it is immoral to take in money from customers in a region and not reinvest most of it in that part of the world. Third, it is also immoral to benefit from abundance and not share with those less fortunate.

I get offers to offshore our R&D every week. Well, we're located in North America because this is where we intend to do most of our business for now. When we grow significant business in Europe, we'll invest in Europe, too. And when we have reason to invest in Asia, we'll do it without hesitation. This position, however, does not preclude providing support to those in need. Our company charter states that, when we are in a position to do so, we shall find ways to support less fortunate economies through technology transfer programs, donations, etc.

If every CEO of every North American technology company gives in to the lure of easy and quick profit, we'll do the same to our economy as we did to the environment. So our children — I have three — will be out of clean water and out of jobs.

Personally, I have found ways to maintain a profitable software development operation in North America, including the Canadian R&D tax credit program, some other government incentive programs and management practices. Yes, it's more work, but it's the right thing to do.

Gt: Wow. I'd love to delve deeper into that, but we must move on. I noticed that Wolfgang Gentzsch, a name GRIDtoday readers certainly recognize, sits on the company's Board of Directors. What is your relationship with Wolfgang, and how did he end up on the Board?

MARCHAND: Dr. Gentzsch is certainly a respected figure in the Grid computing community, and any start-up company would welcome such a prestigious name on its board of directors — we certainly do. He brings unparalleled technical expertise, market visibility and business acumen.

However, Wolfgang is much more than that to us. He is a contagious motivator. He is an honest and moral person whose perspective on business and people we share. Stephen Perrenod — our VP of sales and marketing — Wolfgang and I all worked at Sun Microsystems, evangelizing HPC and Grid computing together. We may not have always agreed, but we learned to respect and trust each other. We share the same passion and values. Besides, Wolfgang believes in our technology and business experience.

Gt: Looking into the future, how do you see eXludus growing within the next couple of years? Where do you see the company being five or 10 years out?

MARCHAND: In the next couple of years, we envision eXludus developing more innovative products to solve Grid computing data management problems — and there are plenty, I assure you. While we are currently concentrating on technical Grid applications, such as seismic, finance, genomics, etc., we expect to move into commercial Grid applications as well, certainly within the next two years. Our technology bears undeniable advantages for database and Web applications.

Looking further ahead, we are already planning an OEM technology division. Indeed, our technology is, at its core, based on innovative telecommunication principles that are well suited to a wide variety of sectors.

I hope to maintain our focus as a software design company. Our business model is to design innovative software technologies and introduce them to the market. Growing large sales, customer service and consulting practices is not our plan. History tells us that innovative start-ups, when they grow that way, slip into a state of sclerosis. My hope is to keep eXludus a start-up-like business for as long as possible.

Gt: Finally, along the same line, how do you see Grid, and even cluster, technology evolving within the same timeframe?

MARCHAND: Pervasive processing, I believe, is the trend of the future. I think everyone agrees by now — it was not so obvious in 2002 when I started — that telecom and processing technologies are converging. This pervasiveness will undeniably accelerate acceptance of Grid computing, be it in the cluster, edge or worldwide form. Ultimately, one's processing needs, at whatever scale, will be addressed in near real-time by hordes of next-generation PDA/cell/desktop/TV/Web browser/e-mail/portable entertainment devices.

Does that sound far-fetched? Well, if 10 years ago I had predicted that today you'd carry your entire personal music collection in a package smaller than a lighter, for less than $200, you would probably have laughed. iPods are already pervasive.

Gt: That's a good point. Is there anything else you'd like to add?

MARCHAND: I am very thankful to have had the opportunity to represent eXludus Technologies in GRIDtoday, as well as to have been given a chance to express myself on more general and personal points of view. I imagine that my views on economic correctness and the short-term maximization of profits may shock a few people. I'm sorry about that and do not intend to offend anyone. However, I believe things change, and it's about time North America wakes up and adapts.
