Dell EMC announced an agreement to further democratize and advance HPC, as well as additions to its Ready Solutions portfolio that will help customers further optimize their HPC projects. Working with leading researchers and innovators worldwide, Dell EMC HPC solutions empower customers to make critical advances in fields such as research, life sciences and manufacturing.
Dell EMC and NVIDIA have expanded their collaboration by signing a new strategic agreement covering joint development of new products and solutions that address burgeoning workload and data center requirements, with GPU-accelerated solutions for HPC, data analytics, and artificial intelligence. Additionally, Dell EMC will work with NVIDIA to support the new Volta architecture and intends to launch Volta-based solutions by the end of the year.
Dell EMC delivered systems also continue to earn recognition as industry leaders, with an increasing number of systems in the TOP500 and Green500 lists. Published twice annually, the TOP500 list shows the 500 most powerful commercially available computer systems in the world. The Green500 list provides a ranking of the most energy-efficient supercomputers in the world. Multiple Dell EMC delivered systems appear on these lists, including:
- The Cambridge Research Computing Service at the University of Cambridge is implementing a new $12M system aimed at addressing large-scale, high-I/O, data-intensive science cases. Wilkes-2, which appears at #100 on the TOP500 list and #5 on the Green500 list, is the largest GPU system in the UK and the highest-ranked Dell EMC delivered system on the Green500 list. Peta4-KNL, #405 on the TOP500 list, is the largest KNL system in the UK, and brings true petascale research computing within reach of all UK research groups and industries that wish to buy time on the system.
- The recently upgraded Stampede 2 system at Texas Advanced Computing Center (TACC) is now at #12 on the TOP500 list and #32 on the Green500 list. It is the highest-ranked Dell EMC delivered system on the TOP500 list. The upgrade includes the phased deployment of a ~18-petaflop system based on Dell EMC PowerEdge servers with Intel® Xeon® Phi processors. Additionally, Stampede 2 is expected to double the peak performance, memory, storage capacity and bandwidth of Stampede 1. The last phase of the system will include integration of the upcoming 3D XPoint non-volatile memory technology.
- The Dell EMC HPC Innovation Lab, which appeared on the TOP500 list in November 2016 with 544 nodes using a mix of Intel Xeon and Intel Xeon Phi processors, now sits at #374 on the new TOP500 list and at #150 on the Green500 list. The lab also includes a 32-node Intel Xeon Scalable processor cluster with Intel Omni-Path networking for testing and evaluation, as well as Dell C4130 servers with NVIDIA GPUs and Mellanox InfiniBand.
Dell EMC is also collaborating with CoolIT Systems to provide a factory-installed, direct contact liquid cooling solution that will address power and cooling challenges for a wide range of customers. The cold plate solution, designed and manufactured by CoolIT Systems, will be available in select Dell EMC PowerEdge 14th-generation servers. It uses warm water to cool the CPUs, eliminating the need for chilled water and reducing cooling energy costs by up to 56 percent for data center infrastructure (cooling PUE). The solution also enables increased rack density, allowing customers to deploy up to 23 percent more IT equipment.
The Jülich Supercomputing Centre is working with Dell EMC and Intel to deploy a supercomputer combination that is expected to be the first Cluster-Booster platform based on technology developed in the European Union (EU)’s DEEP and DEEP-ER research projects. This Cluster-Booster marks a step toward the implementation of modular supercomputing, a new paradigm that directly reflects, in the architecture of the supercomputer, the diversity of execution characteristics found in modern simulation codes. Instead of a homogeneous design, the modular supercomputing paradigm enables optimal resource assignment, allowing more scientists and engineers to solve their highly complex problems through simulation. The Booster module will be a 5-petaflop cluster based on Dell EMC PowerEdge C6320p servers, connected with Dell EMC Networking H-Series fabric based on Intel Omni-Path® technology.
The NASA Center for Climate Simulation (NCCS) at Goddard Space Flight Center needed a system that combines HPC and virtualization technologies in a private cloud designed for large-scale data analytics. To meet this need and alleviate some of the strain on Discover, their continually evolving HPC system, the NCCS launched its Advanced Data Analytics Platform (ADAPT) onsite private cloud. A green initiative, ADAPT was built largely from decommissioned HPC components, including hundreds of Dell EMC PowerEdge servers that came out of the Discover supercomputer as that system evolved to bring in new technologies. Scientists interact with the ADAPT team to provision resources and launch ephemeral VMs for “as needed” processing. The data-centric, virtual system approach significantly lowers the barriers and risks to organizations that require on-demand access to HPC solutions.
Ian Buck, vice president and general manager – Accelerated Computing Group, NVIDIA
“Increasingly, deep learning is a strategic imperative for every major technology company, permeating every aspect of work. Specifically, artificial intelligence is being driven by leaps in GPU computing power that defy the slowdown in Moore’s law. The work we are doing to advance GPU computing alongside Dell EMC will empower AI developers as they race to build new frameworks to tackle some of the greatest challenges of our time.”
Garrison R. Vaughn, NCCS systems engineer, ADAPT Team – NASA’s Goddard Space Flight Center
“Our leadership challenges us to do new, innovative things, which is where ADAPT came from. ADAPT enables researchers to uncover valuable information, which is why it is critical that the system performs well. The Dell servers, working as compute nodes inside ADAPT, have been real workhorses and have been reliable. That’s impressive, especially with the heat we run through the system.”
Paul Teich, principal analyst, TIRIAS Research
“Dell EMC is moving beyond its vertical HPC segment focus and strengthening its commitment to key HPC platform technologies. A new strategic agreement with NVIDIA for joint innovation in HPC, big data, and machine learning and collaborating with CoolIT Systems for factory-installed warm water cooling systems will accelerate Dell EMC’s mission to democratize HPC.”
Armughan Ahmad, senior vice president & general manager – Ready Solutions and Alliances, Dell EMC
“Dell EMC is proud of our work to help the research community and its customers capitalize on HPC, expanding it from a niche market to a broader audience, from departmental clusters to leadership-class systems. Our commitment to continuing to partner with strong leaders and push the envelope around HPC innovation is solid and growing, as evidenced by our market leadership, industry-leading products, and world-class customers and deployments.”
While several technical experts highlight just how smart our appliances, lights, cars, factories, and even cities are becoming, others question whether we’re thinking hard enough about what technology should do rather than what it can do.
As computers get smarter, smaller, and more portable, their brains are being added to an array of formerly dumb devices in all aspects of our lives, even when we are on the road. These smart devices are increasingly equipped with sensors and connected to the internet, allowing data to flow from the external environment to central collection points to be stored, analyzed, and used for a variety of purposes, including optimizing performance.
But the massive amount of data being gathered also raises ethical questions about its use, and those questions increasingly are being answered not by philosophers, ethicists, regulators, or legislators, but by programmers designing the software that guides the devices’ behavior.
What we are doing is instrumenting the world at a greater and greater density to get more and more data to do interesting things. It’s opening up not just very interesting uses, but very troubling abuses.
The growing sophistication of the global Internet of Things is the focus of many executives. They highlight specific projects, including the city of Chicago’s experimental sensor network gathering several kinds of data, drones used for environmental research, smart lighting that could cut electric bills by 90 percent, the potential of a future smart energy grid to handle the variability of clean energy, and the importance of machine learning to the continued growth of the Internet of Things.
The Internet of Things has the potential to have an enormous impact both at home and in the business world. There are several initiatives underway in Chicago, including one that uses consumer complaints to direct an undermanned crew of health inspectors more effectively. The work, currently in a pilot phase, crunches 34 variables to target the restaurants likeliest to have unsanitary conditions and reaches them an average of 7.4 days earlier than under existing protocols. The aim is to reduce cases of food poisoning in the city.
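The core idea behind such a program is straightforward: score each establishment on inspection-relevant variables and visit the highest-risk ones first. The following is a minimal illustrative sketch, not Chicago's actual model; the variable names, weights, and data are all hypothetical stand-ins for the 34 variables the city uses.

```python
# Hypothetical risk-ranking sketch: score establishments on a few
# illustrative variables and sort so inspectors visit the likeliest
# violators first. Weights and features are invented for illustration.

def risk_score(features, weights):
    """Weighted sum over inspection-relevant variables (0.0 if missing)."""
    return sum(weights[k] * features.get(k, 0.0) for k in weights)

# A small, made-up subset of the kinds of variables such a model might use.
WEIGHTS = {
    "past_critical_violations": 2.0,
    "days_since_last_inspection": 0.01,
    "nearby_sanitation_complaints": 1.5,
}

restaurants = [
    {"name": "A", "past_critical_violations": 3,
     "days_since_last_inspection": 400, "nearby_sanitation_complaints": 2},
    {"name": "B", "past_critical_violations": 0,
     "days_since_last_inspection": 120, "nearby_sanitation_complaints": 0},
]

# Highest predicted risk first: this ordering becomes the inspection queue.
ranked = sorted(restaurants, key=lambda r: risk_score(r, WEIGHTS), reverse=True)
print([r["name"] for r in ranked])
```

In practice a program like this would learn the weights from historical violation outcomes rather than hand-set them, but the output is the same kind of artifact: a prioritized worklist for a limited inspection crew.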
The city is using a similar model to guide its rodent-baiting program to head off outbreaks, and to target illegal cigarette sales. The city is committed to data-informed decision-making. Chicago has deployed 28 sensor arrays — and plans 500 by the end of 2018 — that monitor temperature, humidity, sound, ozone, nitrogen dioxide, sulfur dioxide, pollen, and diesel, and that also have cameras to gather information on pedestrian, bicycle, and vehicle interactions. One initial priority for the gathered data is to examine pedestrian and vehicle near-misses.
Security concerns are sometimes raised when images and sensor data are gathered in public places, but programmers regularly make what are tantamount to ethical decisions in an array of fields. Those decisions will hold far greater sway over how devices operate and how data is used than any belated regulatory decisions made by politicians. Policymakers are somewhere between three and 20 years behind what we’re doing. By the time policy is discussed, we’re on the third generation, and the reality on the ground overrides policy.
A home repair to a furnace or appliance could be scheduled just before a device breaks rather than just after, with the repair company acting on information sent from the device itself. The repairman could automatically be scheduled when you’re home because your smart devices know you have an extra cup of coffee on Saturday mornings and so are likely to be there. But would the system seem as benign if it noticed you’d had your third glass of wine and sent that data to your doctor and insurance agent?
Similar ethical issues are playing out in the real world. The complex software guiding self-driving cars has life-and-death consequences, and in designing them programmers are making critical decisions that could affect people’s lives. If that work is done without considering its ethical implications, that’s a grave mistake.
The Internet of Things might be a way to drive efficiency. It is also the deployment of the most sophisticated surveillance state there has ever been. As you design these things, you are making policy decisions. If you don’t think about these questions yourself, you need to engage with people who do.
The digital revolution is one of the great social transformations of our time. How can we make the most of it, and also minimize and manage its risks? New information and communication technologies are having a profound impact on many aspects of social, political and economic life, raising important new issues of social and public policy. Surveillance, privacy, data protection, advanced robotics and artificial intelligence are only a few of the many fundamental issues that are now forcing themselves onto the public agenda in many countries around the world.
We have witnessed social media playing a major role in mobilizing events of historic proportions, such as the Arab Spring protests in the Middle East and the Occupy Wall Street movement in the United States. Social media platforms are often cited as the facilitators of these mobilizations.
But most big social media-generated events seem to burst upon the scene, capture our attention for a few days, and then fade into oblivion with nothing substantial accomplished. No one — be they a charismatic leader or a raucous crowd — seems able to move people into action for extended periods of time using social media. This is especially ironic at a time when the online, crowdsourced society has reached maturity and is now widespread.
The rise of social media alongside the end of power is anything but a coincidence. In fact, the confluence of these factors is a techno-social paradox of the 21st century. Social media has provided the fuel for unpredictable, temporary mobilization, rather than steady, thoughtful, and sustainable change. In business, this may play out when a new product, company, or service, from phones to startups to games, grabs people’s attention for a single announcement and then flames out.
Insufficient attention is paid to the underlying incentive structures: the hidden network of interpersonal motivations and leadership that provides the engine for collective decision-making and action.
A number of large-scale social mobilization experiments bear out the importance of incentive structures. The difference in strategy is not just the emphasis on viral communications, but the way that incentives are matched with the motivations of the participants. Successful teams tap into people’s motivation for personal profit, charity, reciprocity, or entertainment.
Incentive networks are an important middle layer between ideologies and culture on the one hand, and the simple digital fingerprints left by social movements on online platforms on the other. They are part of what is fueling new areas of business such as the co-creation of products and brands through competitions and crowdsourcing.
Ideologies and culture shape what individuals want to achieve as they go about their daily lives, how they relate to each other’s well-being, and how they help each other achieve those goals. These behaviors can be mapped into a network of incentives where each individual payoff depends on the payoffs of others. By contrast, the inability to sustain and transfer bursts of social mobilization into lasting social change or business results is rooted in the superficial design of today’s digital social media — that is, it is designed primarily to maximize information propagation and virality through optimization of clicks and shares. However, this emphasis is detrimental to engagement and consensus-building. Understanding the dichotomy is an important lesson for those involved in online marketing.
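The claim that behaviors "can be mapped into a network of incentives where each individual payoff depends on the payoffs of others" can be made concrete with a toy model. The sketch below is my illustration, not a model from the article: each person earns some value from their own effort plus a fixed spillover fraction of each neighbor's payoff, and the mutually consistent payoffs are found by fixed-point iteration. The names, values, and spillover rate are all hypothetical.

```python
# Toy incentive network: payoff_i = own_i + s * sum(payoff_j for j in N(i)).
# All names and numbers are invented for illustration.

neighbors = {"ann": ["bob"], "bob": ["ann", "eve"], "eve": ["bob"]}
own_value = {"ann": 1.0, "bob": 0.5, "eve": 2.0}
SPILLOVER = 0.3  # fraction of each neighbor's payoff shared back (s < 1/max-degree keeps this stable)

def solve_payoffs(iterations=100):
    """Iterate the payoff equations until they reach a mutually consistent fixed point."""
    payoff = {p: 0.0 for p in neighbors}
    for _ in range(iterations):
        payoff = {
            p: own_value[p] + SPILLOVER * sum(payoff[q] for q in neighbors[p])
            for p in neighbors
        }
    return payoff

payoffs = solve_payoffs()
# Bob's own value is lowest, yet his final payoff exceeds it because both
# neighbors' successes spill over to him: his payoff depends on theirs.
```

The point of the toy is the interdependence: changing one person's incentive shifts everyone's equilibrium payoff, which is why incentive design, not just message virality, determines whether mobilization sustains.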
There have been other great technological revolutions in the past but the digital revolution is unprecedented in its speed, scope and pervasiveness. Today, less than a decade after smartphones were first introduced, around half the adult population in the world owns one – and by 2020, according to some estimates, 80% will.
Smartphones are, of course, much more than phones: they are powerful computers that we carry around in our pockets and handbags and that give us permanent mobile connectivity. While they enable us to do an enormous range of things, from checking and sending emails to ordering a taxi, using a map and paying for a purchase, they also know a lot about us – who we are, where we are, which websites we visit, what transactions we’ve made, whom we’re communicating with, and so on. They are great enablers but also powerful generators of data about us, some of which may be very personal.
The rapid rise and global spread of the smartphone is just one manifestation of a technological revolution that is a defining feature of our time. No one in the world today is beyond its reach: the everyday act of making a phone call or using a credit card immediately inserts you into complex global networks of digital communication and information flow.
The digital revolution is often misunderstood because it is equated with the internet and yet is much more than this. It involves several interconnected developments: the pervasive digital codification of information; the dramatic expansion of computing power; the integration of information technologies with communication systems; and digital automation or robotics.
Taken together, these developments are spurring profound changes in all spheres of life, from industry and finance to politics, from the nature of public debate to the character of personal relationships, disrupting established institutions and practices, opening up new opportunities and creating new risks.
We are living through a time of enormous social, political and technological change. On the one hand, the digital revolution is enabling massive new powers to be exercised by states and corporations in ways that were largely unforeseen. And, on the other, it is giving rise to new forms of mobilization and disruption from below by a variety of actors who have found new ways to organize and express themselves in an increasingly networked world. While these and other developments are occurring, the traditional institutions of democratic governance find themselves ill-equipped to understand and keep pace with the new social and technological landscapes that are rapidly emerging around them.
Key challenges for digital society
- What are the consequences of permanent connectivity for the ways that individuals organize their day-to-day lives, interact with others, form social relationships and maintain them over time?
- What implications do these transformations have for traditional forms of political organization and communication? Are they fueling alternative forms of social and political mobilization, facilitating grass-root movements and eroding trust in established political institutions and leaders?
- What are the implications for privacy of the increasing capacity for surveillance afforded by global networks of communication and information flow? Do individuals in different parts of the world value privacy in the same way, or is this a distinctively Western preoccupation?
- How is censorship exercised on the internet? What forms does it assume and what kinds of material are censored? How do censorship practices vary from one country to another? To what extent are individuals aware of censorship and how do they cope with it?
- Just as the internet creates new opportunities for states and other organizations to exercise surveillance and censorship, it also enables individuals and other organizations to disclose information that was previously hidden from view and to hold governments and corporations to account. Who are the digital whistleblowers, how effective are they, and what are the consequences of the new forms of transparency and accountability that they, among others, are developing?
- What techniques do criminals use to deceive users online, how widespread are their activities and what can users do to avoid getting caught in their traps?
- What impact is the digital revolution – including developments in artificial intelligence and machine learning – having on traditional industries and forms of employment, and what impact is it likely to have in the coming years? Will it usher in a new era of mass unemployment in which professional occupations as well as manual jobs are displaced by automation, as some fear?
- What are the implications of the pervasive digitization of intellectual content for our traditional ways of thinking about intellectual property and our traditional legal mechanisms for regulating intellectual property rights?
- How widespread are new forms of currency that exist only online – so-called cryptocurrencies like Bitcoin – and what impact are they likely to have on traditional financial practices and institutions?
- How are new forms of data analysis and advanced robotics affecting the practice of medicine, the provision of healthcare and the detection and control of disease, and how might they affect them in the future?
Every day, and increasingly in every way, we are outsourcing our brains to the internet at a big cost. As smartphones get smarter, it’s easy to argue that we’re getting thicker. That’s not quite true. Our brains are not necessarily shriveling, but they are adapting. Thanks to technology, the need to know has been replaced by the ability to find out. Younger people, especially the digital natives who have never known life without the web, are most comfortable in this new environment.
The plasticity of the brain is its ability to adapt its function according to which neural pathways are most employed. Our brains are changing to meet the demands of this high-octane modern world. Critics worry that technology is ruining our ability to think and communicate properly.
The brain requires exercise, and we allow it to atrophy at our peril. While we get better at juggling ideas, our memories are taking a battering. The Google effect shows that people tend not to bother remembering something if they believe it can be looked up later. People are more likely to index, to remember where information is located rather than the actual information itself.
Memory and our sense of self are inevitably linked, because personal identity is founded on consciousness. When memories fail in old age, we feel we lose a part of us that rests deep within. That is why Alzheimer’s is such a cruel disease. It may well be that memory is more spiritual than we like to admit. By using our minds, we nourish a part of us that goes beyond the physical. Equally, by storing memory outside of ourselves on a piece of technology, we lose something fundamental.
Marketers plan to double their spending on social media in the next five years. Half of CMOs believe they are not prepared to manage the challenges of social media. This disparity highlights an important, and potentially costly, problem: Marketers continue to increase social media spending, yet many are still uncertain about management, strategies, and integration.
There really is no one-size-fits-all social media strategy. Stories are an effective marketing and advertising tool. What marketers need is a process that leads to individual solutions. They must use fundamental marketing concepts and modify them for this new two-way, consumer-empowered medium of social media.
Identify your business objectives and target market. Also consider your industry, the recent performance of the brand, and the current traditional marketing promotions for the product and its competitors. A startup or new product needs to generate awareness, while an older product may need to be revived.
Brands cannot talk to everyone in every social channel, so narrowly define whom you want to listen to and communicate with. Start with simple Google searches on your brand name, use the analytics tools within social networks, and look to secondary research.
Social media is all about producing fresh, relevant content, so create things that your audience will find valuable, whether it’s how-to articles or simply something entertaining. Where you deliver the content matters too. The best social media plans deliver content that’s optimized to each channel. Engage the target audience on the channels they use with material that is unique to the channel. Select social channels that fit brand message, type of content, and target audience.
Some brands have invested in a social media command center: a branded monitoring room that acts as a central visual hub for social data, speeding up marketing and engagement with customers. Don’t base your marketing strategy on people’s tips about what may work for your business. Having even a basic process in place can help you be more strategic about social media decisions and make your social media spending more effective.