IBM-led Webinar Tackles Quantum Developer Community Needs

By John Russell

July 21, 2020

Quantum computing has many needs before it can become the transformative tool many expect. High on the list is a robust software developer community, not least because developers rarely follow ‘intended use’ rules; instead, they bend and break rules in unforeseen ways that transform new technology into applications that catalyze market growth, sometimes explosively.

In fact, efforts to fill out the quantum computing ecosystem (hardware and software) have continued to expand rapidly. Just today, for example, the Trump Administration announced the establishment of three new Quantum Leap Challenge Institutes around quantum sensing, hybrid classical-quantum systems, and large-scale quantum system development (See HPCwire coverage). Making full use of the new class of machines – whenever they arrive – will require a robust and sufficiently large quantum developer community.

Roughly a week ago, IBM held the first of a planned series of webinars on quantum computing – this one on The Future of Quantum Software Development – and it was a treat. Moderated by long-time Forrester analyst Jeffrey Hammond, the panel included three prominent quantum community voices with diverse quantum expertise – Blake Johnson (control systems delivery lead, IBM, and formerly Rigetti), Prineha Narang (Harvard professor and CEO/founder, Aliro Quantum), and tech entrepreneur William Hurley (CEO/founder, Strangeworks).

It was a “glass-half-full” crowd, so consider their enthusiasm when evaluating comments, but it was also a well-informed group not dismissive of challenges.

“You have to realize where we’re actually at,” said Hurley. “We’re not in the days of AMD versus Intel. We’re in the days of little mechanical gates versus vacuum tubes and other solutions. These things aren’t computers from my pure developer standpoint. At this moment in time they’re really great equipment for exploring the quantum landscape and the foundation for building machines.”

Indeed, today’s quantum systems are fragile and complicated, and even the notion of gates can be confusing. “At what point is a gate so complicated that a developer is never going to touch it or understand it anyway? Is that 1,000 (qubits)? Is it 100,000? Is it a million? Because you hear people with all these stories about, you know, millions-of-qubits machines. If you had it, my question is who would program it? Because that sounds really difficult based on where we’re at, and where we’re trying to go,” said Hurley.

Here are a few themes from the discussion.

  • Fast Followers Will Lose! Quantum computing’s inflection point will be such that if you’re not already in the game, you won’t catch up. “You can’t be a fast follower. You have to either be placing a bet now or deciding to take the risk of not being involved at the point where something that is clearly a tremendous change in computing happens,” said Hurley with agreement from Johnson.
  • It Will be a Hybrid World. Yes, there will be ‘general purpose’ quantum computers although for a limited set of quantum-appropriate problems. There will also be specialization with various qubit technologies (ion trap, cold atom, superconducting, etc.) excelling on different applications. Lastly, all quantum computing will be done in a hybrid classical-quantum computing environment.
  • QA isn’t Far Away (Maybe). There was adamant agreement by panelists that a decade is too pessimistic…but there was waffling on just how soon quantum advantage (QA) would be achieved. Two-to-five years was the consensus guess-of-choice although Narang declined to make any guess. One had the sense their belief in quantum computing’s inevitable breakthrough trumped worry over when. Meanwhile the QA watch continues.
  • Expanded (Developer) Conversation Needed. The quantum conversation now is mostly between system developers who tend to be physicists and algorithm developers who tend to be physicists. That has to change. It will require better quantum computers, wider access to various qubit technologies, better tools, and a level of software abstraction that lets developers do what they do without worrying about quantum physics.

The panel discussion was casual and substantive if not technically deep, and IBM has posted a link to it. Next up is a webinar on Building a Quantum Workforce (July 28), and there are plans for another in late August (no date yet) on Commercial Use of Quantum Computers.

Clearly, each of the participating companies has its own agenda but nevertheless the give-and-take had an insider feel.

IBM, of course, is the biggest player in quantum computing today with deep expertise in hardware and software and its IBM Q network which offers various levels of access to quantum resources. IBM quantum systems use semiconductor-based superconducting qubits. Panelist Johnson is a relatively recent IBM import from Rigetti. His work is fairly deep in the weeds and focuses on control systems which convert conventional instructions (electrical signals) into quantum processor control signals.

Aliro and Strangeworks are start-ups focused on software.

Aliro describes its offering as a “hardware-independent toolkit for developers of quantum algorithms and applications. The development platform is implemented as a scalable cloud-based service. Features include: access to multiple QC hardware vendors and devices via an intuitive GUI as well as REST API; quantum circuit and hybrid workflow visualization and debugging tools; cross-compilation between high and low-level languages; and hardware-specific optimizations enabling best execution on every supported hardware platform.” CEO Narang is also an assistant professor of computational materials science at Harvard.

Strangeworks says its “platform is a hardware-agnostic, software inclusive, collaborative development environment that brings the latest advancements, frameworks, open source, and tools into a single user interface.” CEO Hurley is a veteran tech entrepreneur who also chairs the IEEE Quantum Computing Work Group.

These are early days for both of these young companies, and they are broadly representative of a growing number of start-ups seeking to fill out the quantum computing ecosystem. It is probably best to watch/listen to the conversation directly to get a sense of the issues facing software development in quantum computing, but also to gain a glimpse into the mindset of the young companies entering the quantum computing fray.

 

Presented here are a few soundbites (lightly edited) from the panel.

WHAT’S THE STATE OF QUANTUM PROGRAMMING AND WHAT ARE SOME OF THE CHALLENGES?  

Johnson: Recognizing that there was something maybe intimidating about quantum, IBM chose to develop first a graphical interface, a drag-and-drop way to build quantum circuits, which are the fundamental unit of quantum computing. So that’s what’s available today in IBM Quantum Experience. You can drag and drop gates, which are the logical operations that manipulate qubits, which are quantum bits, to build up a program. For more real tasks, you need a real programming interface, and we have Qiskit, an open source quantum computing framework developed by IBM, which is a Python interface for building quantum circuits and for building algorithms that take advantage of quantum processors.
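Johnson’s description of gate-level circuit building can be made concrete with a toy sketch. The code below is not Qiskit; it is a minimal, illustrative two-qubit statevector simulator showing what a Hadamard-plus-CNOT “Bell state” circuit, a common first quantum program, actually computes.

```python
import math

# Minimal 2-qubit statevector toy (illustrative only; real development
# would use a framework like Qiskit). The state is a list of four
# complex amplitudes [amp00, amp01, amp10, amp11]; the low bit is qubit 0.

def h(state, q):
    """Hadamard on qubit q: |0> -> (|0>+|1>)/sqrt2, |1> -> (|0>-|1>)/sqrt2."""
    s = 1 / math.sqrt(2)
    out = [0j] * len(state)
    for i, amp in enumerate(state):
        sign = -1 if (i >> q) & 1 else 1
        out[i] += sign * s * amp          # amplitude staying on this basis state
        out[i ^ (1 << q)] += s * amp      # amplitude moving to the flipped state
    return out

def cx(state, ctrl, tgt):
    """CNOT: flip the target bit wherever the control bit is 1."""
    return [state[i ^ (1 << tgt)] if (i >> ctrl) & 1 else state[i]
            for i in range(len(state))]

# Bell-state circuit: H on qubit 0, then CNOT(0 -> 1).
bell = cx(h([1 + 0j, 0j, 0j, 0j], 0), 0, 1)
```

Measuring this state yields 00 or 11 with equal probability and never 01 or 10 – the entanglement underlying many textbook quantum algorithms.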

Narang: What I like about Qiskit is that it’s very accessible. The challenge is, with that abstraction, you lose a lot of the control over the actual hardware, you don’t necessarily have all of the tools to directly program the system. So the pulse level control that IBM has made available is a good way to bridge that. I wonder, as we go towards other types of hardware, how some of the programs that are written for superconducting circuits will be translated to those (other types of hardware) and if everything is not based off of the same pulse level control scheme, what would be a good way of translating? And I don’t have an answer to this.
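The translation problem Narang raises – how a program written for one qubit technology moves to another – can be sketched as gate-set rewriting. The native gate names and decompositions below are simplified stand-ins for illustration, not the exact rules a real transpiler would apply (which must also handle qubit connectivity, angles, and pulse calibration).

```python
# Illustrative cross-compilation between qubit technologies. The native
# gate sets and decompositions are hypothetical simplifications.
NATIVE_GATES = {
    # a superconducting-style basis (single-qubit rotations plus CNOT)
    "superconducting": {"h": ["rz", "sx", "rz"], "cx": ["cx"]},
    # a trapped-ion-style basis built around an "ms" entangling gate
    "trapped_ion": {"h": ["ry", "x"], "cx": ["ry", "ms", "ry"]},
}

def cross_compile(circuit, target):
    """Rewrite each abstract gate into the target hardware's native gates."""
    table = NATIVE_GATES[target]
    native = []
    for gate in circuit:
        native.extend(table.get(gate, [gate]))  # pass through unknown gates
    return native
```

The same abstract circuit thus lands as different native sequences per backend, which is exactly why a shared low-level control scheme (or a good translation layer) matters.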

Hurley: Our approach is to let all of the languages battle it out. We’re big Qiskit fans. I say that not because we’re on IBM, but because when we first started [there were] already tons of people working with it. We’re big supporters, soon to be making our first contributions to it. But you can’t take a developer and make them a quantum developer overnight. Some things are fundamentally different. For example, if I’m programming on any other platform in the world and I run into an error, I can find it or I know how it works. Whereas [with quantum] what we see happen with developers is they get in and they can instantiate a teleportation thing through Microsoft Quantum Katas or IBM Qiskit, or whatever. Then the moment it breaks, if they don’t understand the fundamental physics behind it, they’re at a dead end.

Narang: A lot of things are not yet possible with quantum hardware that we take for granted in a classical computer. [I’m] thinking about conditional statements and intermediate measurements, things that are not trivial to do in a quantum circuit at the moment, but that are going to be very important for writing more complex quantum programs in the future. As those advances come from the hardware side, we think about how to translate them into something that you can use on the software side.
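The capability Narang singles out – measure mid-circuit, then apply a gate conditioned on the classical result – can be illustrated with a toy statevector sketch (again illustrative only, not a real framework). The `r` parameter stands in for the random measurement draw so the behavior is reproducible.

```python
# Toy sketch of mid-circuit measurement plus a classically conditioned
# gate. State: four complex amplitudes [amp00, amp01, amp10, amp11],
# low bit = qubit 0; r in [0, 1) stands in for the random draw.

def measure(state, q, r):
    """Measure qubit q, collapsing and renormalizing the state."""
    p1 = sum(abs(a) ** 2 for i, a in enumerate(state) if (i >> q) & 1)
    outcome = 1 if r < p1 else 0
    norm = (p1 if outcome else 1 - p1) ** 0.5
    return outcome, [a / norm if ((i >> q) & 1) == outcome else 0j
                     for i, a in enumerate(state)]

def x(state, q):
    """Pauli-X (bit flip) on qubit q."""
    return [state[i ^ (1 << q)] for i in range(len(state))]

def measure_and_correct(state, r):
    """Measure qubit 0; if it reads 1, classically trigger X on qubit 1."""
    outcome, st = measure(state, 0, r)
    return x(st, 1) if outcome else st
```

On the Bell state (|00>+|11>)/sqrt2, either measurement branch leaves qubit 1 in |0> – the measure-and-feed-forward pattern at the heart of teleportation-style protocols, and exactly the kind of classical conditioning that is still hard to express on today’s hardware.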

GIVEN THE NASCENT STAGE OF QUBIT TECHNOLOGIES – SUPERCONDUCTING, TRAPPED ION, COLD ATOMS, ETC. – WILL THERE BE SPECIALIZED MACHINES?

Narang: Tricky question. I’m trying to see how to answer it without making all of my colleagues angry at me. My personal view is there will be certain problems that will run just fine on a variety of hardware, and some that might be more specialized to particular types of hardware, and that’s just associated with the physics underlying that type of hardware. This will be especially important as we try and map problems from condensed matter and chemistry onto some of these devices. We’ll see that not every technology is ideal or even possible for all kinds of problems. But we don’t have that kind of experience yet.

Photo of IonQ’s ion trap chip with image of ions superimposed over it. Source: IonQ

[Currently] there are only a few different trapped ion systems out there. It’s very hard to get access to those, and not many of them have gone the route that IBM did in making at least the smallest devices available very broadly. So there’s a whole lot more that needs to be done on a simulator before you can actually try it on real hardware. And of course, systems like cold atoms or photonic circuits are really niche. Getting time on those is very expensive and almost unaffordable for the average developer. The best you can hope to offer to them is to say, “Hey, if you write something that works on this genre of systems, we can get you to a point where it runs on other systems,” and that’s something that my group is trying to accomplish at the moment.

Hurley: On my desk [I have an] iPhone and iPad and a 16-inch laptop, okay? And those things can all do email and they can all surf the web and they do them really well. But I can’t open Xcode and compile a big program on my iPhone. It’s going to be like that in quantum, and it’s going to take it in directions that none of us can imagine; it would be foolish to try. There are already 16 startups I’m following who are making quantum processors for specific applications, exactly like Pri described. Look at supercomputing today and high performance computing. There are computers built on Wall Street just for doing trades or [even] specific types of trades. We’ve seen this throughout computing history, right? I don’t think quantum will be any different. I hope that there are as many hardware and software solutions available as possible.

WHEN WILL WE ACHIEVE QUANTUM ADVANTAGE?

Johnson: The problem is we don’t know exactly how powerful a quantum machine needs to be in order to get to quantum advantage. We’ve committed to and have been on a track of doubling quantum volume[i] (a broad performance metric) every year. We put up and made publicly available our first machine with a quantum volume of 32 back in April. We now have eight machines with quantum volume of 32 that are available in IBM Quantum Experience, and we continue to march along that path of doubling QV [yearly]. At what point is it a powerful enough machine for quantum advantage? I’m not sure. I would say personally, I’d be surprised if we have to get all the way to fault tolerance to find a single application where you can do something with quantum advantage, whether that’s time-to-solution, cost or whatever, against classical resources.
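For back-of-the-envelope context, the doubling roadmap Johnson describes is easy to extrapolate. The starting point (QV 32 in 2020) is from his comments; the projections below are naive compounding for illustration, not IBM claims.

```python
# Naive extrapolation of a "double Quantum Volume every year" roadmap.
# Starting point (QV 32 in 2020) per the panel; targets are illustrative.

def projected_qv(start_year, start_qv, year):
    """Quantum Volume in a given year under strict yearly doubling."""
    return start_qv * 2 ** (year - start_year)

def year_to_reach(start_year, start_qv, target_qv):
    """First year the doubling roadmap meets or exceeds target_qv."""
    year, qv = start_year, start_qv
    while qv < target_qv:
        qv *= 2
        year += 1
    return year
```

Under strict doubling, QV 32 in 2020 becomes 256 by 2023 and would not pass one million (2^20) until 2035 – a reminder of why nobody on the panel would pin quantum advantage to a specific QV figure.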

Hurley: If you’re in this industry, if you want to be a developer in this industry, it needs to be a long-term play, a long-term vision that you have. The pessimist thinks [quantum payoff is] 20 years out, the optimist [thinks] it’s three years out, and the reality is if you want to be involved in it then you should be preparing now, because all I can tell you is at some point between tomorrow and some future tomorrow it will happen, and the inflection point will be steep.

A rendering of IBM Q System One, the world’s first fully integrated universal quantum computing system, currently installed at the Thomas J Watson Research Center. Source: IBM

Johnson: Those of us that build hardware understand that the most critical thing preventing us from reaching quantum advantage is the hardware, and so our first tools are really focused on those domain experts, to give them the tools they need to build better hardware. That’s an important audience that we like, and we will continue to make better and better tools to serve that audience. But as Pri mentioned, people are doing applications research with the devices that exist today, and are finding they maybe can’t yet solve a given problem better than they could with a classical resource, but they can solve problems. They’re starting to figure out what the limitations are, how they can squeeze the most utility out of the devices today, and then get ready for the devices that exist tomorrow.

So we’re starting to build tools that try to lower the barriers to entry for those people, not the domain experts, but a new audience. We don’t want them to have to learn everything about quantum computing in order to be able to get started. The idea here was to try to reach out to this new audience of developers [such that] they can write their programs by describing the problem that they want to solve, in this case, an optimization problem. So they can write down a quadratic formula, they write down some description of the constraints of their optimization problem. They can choose different solvers, both quantum solvers and classical solvers, because a lot of these developers are trying to [understand] the value today and [how] quantum works versus the classical.

Hurley: Looking at it from a pure 30 years of seeing new tech come along and doing the development, I think 10 years sounds very pessimistic now. What do most people imagine as a quantum computer in 10 years? Is it a full general purpose machine, whatever? Who knows? But I don’t think you’re going to wait 10 years; I think it’s more in the two-to-five-year range to where there are things that start to become economically advantageous to enterprises, and it will probably be in chemistry and material science.

HOW BIG MUST THE DEVELOPER COMMUNITY BECOME AND WHAT’S THE KEY TO DOING THAT?

Hurley: Most of the people building machines aren’t actually talking to developers. They’re talking to physicists who can download development tools and they’re playing the role of developers. Software developers are not necessarily the greatest physicists and physicists are not all the greatest software developers, right? [We need] to drive it to a point where it’s not 200 people who can program the machines; it’s 2 million people. I believe this is the first real leap, then in the next 10 years computing will change more than it has in the last hundred.

Johnson: Open source, I think, is a critical piece to accelerating these kinds of developments because we want these tools to be available to the broadest audience possible. Open source is a great accelerator for making that happen. [But even] on the fastest timescale, we’re a long way off from the App Store experience where on my phone, I can get an app that takes advantage of current resources. But I think we’re not that far away from the developer equivalent of that, which is, you know, package management systems where I can just say, pip install or brew install some package, which is a quantum library for some application domain. The goal is to have like the equivalent of iPhone experiences today.

If I’m an iPhone developer and I want to develop a new app that uses augmented reality to check the position of a basketball, that itself is a non-trivial machine vision task, right? But we don’t ask every iPhone developer to be a machine vision expert. They just plug in [the framework] they want to use with ARKit, which is Apple’s augmented reality solution. And off they go. We need to get to that point, and I don’t think it’s that far away, where a developer can use quantum resources without having to be an expert in quantum computing.

Google’s Sycamore quantum chip

Hurley: If you think back to 2007, there were 400 of us a week after [the iPhone introduction] with iPhones and people hacking on them. It went to tens of thousands almost instantly, right? Within six months to a year, and then over the course of 10 or 11 years, you get to 23 million people who are doing that. That mass of developers being involved drives apps exactly as Blake just said. You have to have a mass of developers to do that. That’s where quantum computing faces its biggest challenge, when it gets what I call out of the lab into the real world and all of a sudden there’s a million developers. Because developers rarely use things in the exact way they were intended to be used. They will find more uses, they will find the bugs, they will find the weaknesses. So the faster we can get to that point the better. I mean, Pri, what are your thoughts?

Narang: We’re taking some of the circuits run on silicon superconducting to trapped ion and realizing that some don’t work the same way. And yeah, forget about developers breaking things in new ways; even experienced people break things in ways they didn’t anticipate, and have to call an engineer and say, hey, how do I fix this? If we expect, you know, a million developers entering the community to have to get answers from an expert engineer, that’s probably not a very scalable model. Something that could be useful is having better simulators that allow you to replicate some of the noise associated with current hardware, to see how things are performing. Also, simple stuff like getting runtime estimates. Getting a yea or nay on whether your circuit is actually going to fit on the device you’re trying to run it on. That’s a problem I’ve seen a lot of people have. They have a beautiful idea, and they assume that it can run on this really tiny device. I think there are different levels to how we make it easier for developers who are entering the field.
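The pre-flight checks Narang wishes for – does the circuit fit, and roughly how well will it run – can be sketched in a few lines. The error model (independent gate failures) is a deliberate over-simplification, and the device numbers are made up for illustration.

```python
# Sketch of simple pre-flight checks for a quantum job: fit-on-device
# yea/nay, plus crude runtime and success-probability estimates.
# Device parameters are hypothetical illustration values.
DEVICE = {"qubits": 5, "gate_error": 0.01, "gate_time_us": 0.5}

def fits_on_device(circuit_qubits, device=DEVICE):
    """Yea or nay: does the circuit's qubit count fit on the device?"""
    return circuit_qubits <= device["qubits"]

def estimate(n_gates, device=DEVICE):
    """Crude runtime and success-probability estimate for n_gates gates,
    assuming gates run serially and fail independently."""
    runtime_us = n_gates * device["gate_time_us"]
    p_success = (1 - device["gate_error"]) ** n_gates
    return runtime_us, p_success
```

Even this crude model makes Narang’s point: at a 1% gate error, a 100-gate circuit succeeds only about a third of the time, so knowing that before submitting a job saves a lot of wasted runs.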

Link to panel video, https://www.youtube.com/watch?v=fBP6qTc_fGU&feature=youtu.be

[i] Quantum Volume (QV) is a hardware-agnostic metric that we defined to measure the performance of a real quantum computer. Each system we develop brings us along a path where complex problems will be more efficiently addressed by quantum computing; therefore, the need for system benchmarks is crucial, and simply counting qubits is not enough. As we have discussed in the past, Quantum Volume takes into account the number of qubits, connectivity, and gate and measurement errors. Material improvements to underlying physical hardware, such as increases in coherence times, reduction of device crosstalk, and software circuit compiler efficiency, can point to measurable progress in Quantum Volume, as long as all improvements happen at a similar pace. https://www.ibm.com/blogs/research/2020/01/quantum-volume-32/
