Mr. President, dear Veterans, Mr. General Secretary, Mr. Ambassador, Your Excellencies, Ladies and Gentlemen, good evening.

Last November the world marked the centenary of the end of the First World War. The Great War was supposed to be the war to end all wars. But less than 21 years later, Europe was at war again, and in September this year we will commemorate the 80th anniversary of the outbreak of World War II.

While not a battleground in either of the wars, Cyprus contributed significantly to the allied effort. Yet in many ways, Cyprus’s role in the two wars is an untold story.

The greatest contribution of Cyprus to the First World War was in the form of the Cyprus Mule Corps. Between 13,000 and 16,000 volunteer muleteers, Greek and Turkish speaking, served with the British Army on the Macedonian Front.

In the Second World War, Cypriots fought side by side with forces from across the Commonwealth and the allies. Some 20,000 Cypriots from all communities of the island – Greek, Turkish, Armenian, Maronite and Latin – volunteered with the armed forces, while another 10,000 Cypriots living in the UK, Australia and the US enlisted for service in those countries. Best known among the Cypriot volunteer forces was the Cyprus Regiment, founded on 12 April 1940 and celebrating its 79th anniversary this Friday. Nor was it only Cypriot men who joined the allied cause: Cypriot women served in the Women’s Auxiliary Territorial Service and the Women’s Auxiliary Air Force.

Members of the Cyprus Regiment saw service not only in Greece, but also in France, Italy, the Middle East and North Africa. Some 600 men were killed in action, and are buried in 56 cemeteries in 16 countries.

Yet for me one of the most inspiring moments as High Commissioner was to attend the Mayor of Nicosia’s Remembrance Day event in November, and to meet and thank the Cyprus WW2 veterans present. It is wonderful to have veterans among us again tonight. Theirs are the living faces of Cyprus in WW2.
We owe a great debt to them and to the other Cypriot volunteers for their contribution to the allied cause. They fought for the cause of freedom, and they were part of the victory over fascism in Europe. Regrettably, the adherents of fascism were to make their presence felt again in Cyprus, with tragic consequences for the people of this island.

Strangely, these facts are not well known to ordinary Cypriots. References to Cyprus’s contributions in the World Wars in the public schools’ curriculum are limited and certainly not sufficient. I wonder how many Cypriots have visited the memorials to the fallen erected in Nicosia, Paphos and Larnaca by the Cyprus Veterans’ Association of WWII?

“1940: Faces & Images” is a positive step towards raising the profile of Cyprus in the World Wars. I welcome the fact that this exhibition includes participation from the Imperial War Museum, and that a smaller part of it will travel to the National Army Museum in London, bringing the stories of UK–Cyprus co-operation in the Great Wars to the attention of the broader public in both our countries. These testimonies of historic co-operation form an important part of our bilateral links and of the history that binds our nations together.

The High Commission is pleased to be working with the Bank of Cyprus Cultural Foundation on a number of side events to the exhibition in the coming months, which will further tell the stories of the Great Wars.

Learning about and from the past is important for every nation. Cyprus has a number of impressive stories to tell from this part of its history: of communities working together side by side for the common good; of defending freedom; of providing protection to those in need; and of industriousness.
Such values are as important today as in the past, and they are central to Cyprus’s modern role and vocation as a pillar of stability and European values in the Eastern Mediterranean region.

I congratulate the Bank of Cyprus Cultural Foundation on its initiative in holding this exhibition and, like all of you, I look forward to learning more from it.

Thank you, and a very good evening.
A decade after the start of the wars in Afghanistan and Iraq, studies have shown that the incidence of post-traumatic stress disorder (PTSD) among troops is surprisingly low. A Harvard researcher credits the numbers, in part, to efforts by the Army to prevent PTSD, and to ensure that those who develop the disorder receive the best treatment available.

In an article in the May 18 issue of Science, Professor of Psychology Richard J. McNally says there is reason for cautious optimism when it comes to the prevalence of PTSD. While early estimates suggested that as many as 30 percent of all troops might develop the condition, current surveys show actual rates ranging from 2.1 percent to 13.8 percent. The U.S. Millennium survey of U.S. troops found that 4.3 percent of all American military personnel deployed to Iraq and Afghanistan have developed PTSD.

“As a society we’re much more aware of these issues than ever before,” McNally said. “That is reflected by the fact that the military and the Department of Veterans Affairs have established programs to ensure soldiers receive the best treatment possible. The title of my article is ‘Are We Winning the War Against Post-Traumatic Stress Disorder?’ I think a provisional answer to that is, ‘Yes, we might be.’”

When asked about concerns among some veterans and counselors that PTSD is underreported, McNally said: “Estimates of PTSD are higher when surveys are anonymous than when they are not. Lack of anonymity is the chief limitation of the otherwise excellent U.S. Millennium survey, which found a rate of 7.6 percent among U.S. combat troops. By comparison, an off-the-record survey by Hoge et al. found that the rate of PTSD was 12.6 percent in combat units. On the other hand, the Hoge et al.
study was much smaller, did not use random sampling, and did not exclude subjects with predeployment PTSD symptoms.”

While part of the drop in PTSD may simply be that the wars are less lethal — in a decade of war in Iraq, fewer than 5,000 American troops were killed, compared with more than 55,000 over a similar period in Vietnam — McNally thinks that new efforts by the Army to tackle the disorder sooner, and to ensure soldiers receive better treatment, may be yielding results.

The suggestion that 30 percent of troops might develop PTSD was based on the findings of the National Vietnam Veterans Readjustment Study (NVVRS), completed in 1990, which found that 30.9 percent of Vietnam veterans showed symptoms of PTSD. While later analyses brought that number down, the findings served to galvanize Army efforts to address the risk of soldiers developing the condition, McNally said.

“It’s important to remember that simply being deployed carries a great deal of stress,” McNally said. “Soldiers miss their families, and those who stay at home essentially become a one-parent family. Difficulties with children, or school, or making ends meet — there are all kinds of stressors that have to do with separating families, let alone having one member in a war zone. Fortunately, the military has taken steps to help soldiers cope with these stressors in addition to the traumatic combat stressors that can produce PTSD.”

Those steps include the Comprehensive Soldier Fitness (CSF) program and Battlemind training, programs created, respectively, to help soldiers reduce their risk of PTSD before being deployed, and to treat those at risk of developing it after they return.

“It’s not therapy per se, but a preventive intervention to help people put their experiences in perspective,” McNally said of the Battlemind training.
“For example, it encourages soldiers to use the sort of emotional bonding that happens within units to reconnect with their families, and to see symptoms like hypervigilance not as signs of a mental disorder, but as something they need to adjust when they come home. It helps people realize that those things are part of the normal readjustment process.”

The results of randomized trials show that, four months after returning home, soldiers who underwent Battlemind training had fewer symptoms of PTSD and depression than did those who underwent the Army’s standard postdeployment program. No such trials have been conducted with CSF, so it remains unclear what impact, if any, it has on the incidence of PTSD.

Despite such efforts, PTSD remains a serious issue among veterans and their families. Treatments developed in the past 20 years — including prolonged exposure and cognitive processing therapy — improve the chances of recovery.

“These treatments weren’t available to veterans of the Vietnam War – they were only developed in the 1990s – and the evidence shows that the longer you have PTSD, the more likely it is that other problems will accumulate,” McNally said. “The earlier we can get people into treatment, the quicker we can help them get their lives back together.”
By the end of 2017, the client PCs of all 138,000 employees who are part of the Dell Technologies companies, plus all the locked and automated PCs that support our business, will be running the latest operating system from Microsoft. We’d like to suggest that all our customers consider making the move sooner rather than later, too.

Our Windows 10 migration is a big move for us, just as it will be for any organization our size, and even for much smaller enterprises. But our own digital transformation demanded it. Even more, we owe it to our customers to migrate ourselves before they are forced to do so when Microsoft ends Windows 7 support in 2020. That’s because we want to be able to share our experience and knowledge to help them migrate as smoothly and effectively as possible.

Windows 10: Three core enhancements

What do you gain by moving your own organization to Windows 10? For starters, you’ll be able to take advantage of three core Windows enhancements: strengthened security, new productivity features, and an update model that can save IT time and effort. More specifically:

Security: Windows 10 delivers a much stronger security model built on a foundation of 64-bit Unified Extensible Firmware Interface (UEFI) Secure Boot. This model includes advanced security measures such as Credential Guard and Device Guard, both of which we are implementing. The first will help us protect against pass-the-hash attacks, and the second will help us block exploits, viruses and malware.

Productivity: Many of your employees may already be using Windows 10 at home, just as ours are, and they are familiar with its many user enhancements, such as the new Start menu, an improved Windows Explorer and Cortana. This familiarity means that workplace adoption will be easier.
Additional user-focused enhancements like Continuum and Windows Hello make the new OS more attractive to many more worker profiles across a typical large organization.

Currency: Windows 10 introduces the Windows-as-a-service (WaaS) delivery model, which provides the latest features and functionality via monthly updates and semi-annual upgrades, enabling IT to plan better. Windows 10’s remote- and self-install functions make it much faster and more efficient to deploy. This not only improves the user experience, but can also cut IT’s time and expense by reducing or eliminating desk visits and the need to physically handle user devices.

Key benefits that can kick your digital transformation into high gear

With these enhancements, Windows 10 can help you accelerate your organization’s digital transformation, making it even faster, more efficient and more responsive. And Windows 10 has three ways to help you and your IT team members on this journey.

First, it’s business-ready, with the WaaS model that enables enterprises to validate and test applications, update security, and add new features and upgrades more often.

Second, Windows 10 is always current. By making updates (i.e., patches) cumulative and an all-or-nothing proposition, Microsoft standardizes its customers’ OS base on a common configuration. This helps ensure business continuity while also supporting faster innovation in business applications.

Third, Windows 10 provides major upgrades twice a year, so enterprises can count on the number of Current Branch for Business (CBB) configurations at any one time being just two — current and upcoming. This reduces triage and troubleshooting for IT, while boosting security.

A sensible approach: What worked for us

At Dell EMC, we took a three-phase approach that we suggest other organizations adopt: prepare your infrastructure, validate and test your applications, and migrate your users and client base in steps.

Phase 1: Prepare infrastructure.
We evaluated our infrastructure as a whole and assessed our group strategies to streamline policy creation and our testing processes. We’re using the Microsoft Deployment Toolkit with Windows Server Update Services to create our reference images. We’ve also followed Microsoft’s guidelines on the Configuration Manager versions that support Windows as a service, beginning with System Center Configuration Manager build 1511.

Phase 2: Application validation and testing. Given the cumulative “always current” updates and the tight timeline between releases, Dell EMC chose Windows 10 Current Branch for Business as the edition for most of our application deployment scenarios. Of course, your situation may be different, so consider the two other Windows 10 servicing options, Current Branch and Long-Term Servicing Branch, to determine what’s best for you.

Phase 3: Migrate users/clients. In this phase, we’ve taken our Windows 10 deployment down three different paths to standardization: new hires and refreshes, wipe-and-reloads, and in-place upgrades. The first is the easiest. The second involves anyone with technical issues. The third is the most challenging, with by far the greatest number of users. But using Configuration Manager, we continually review and level up clients with background updates, so they are ready to upgrade to Windows 10.

Getting Started

If you need help with your organization’s Windows 10 migration, we invite you to learn more about how Dell EMC can provide Windows 10 migration assistance. Also, check out our CIO Scott Pittman’s blog for much greater detail on Dell EMC’s Windows 10 migration than we can provide here.

Lastly, we also have permission to share an extremely valuable Gartner report, Optimize Your Cost to Migrate to Windows 10 Using Gartner’s Cost Model. It explains the key determinants of Windows 10 migration costs that you should be aware of, as well as some recommendations to consider.
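Looping back to Phase 3: the three migration paths can be expressed as a simple triage rule. The sketch below is purely illustrative — the device fields (`is_new`, `has_issues`) and names are invented for the example, and a real implementation would draw this state from Configuration Manager inventory rather than hard-coded dictionaries.

```python
# Illustrative triage sketch for the three Phase 3 migration paths.

def migration_path(device):
    """Assign a device to one of the three standardization paths."""
    if device.get("is_new"):        # new hires and hardware refreshes
        return "new/refresh"
    if device.get("has_issues"):    # anything with technical problems
        return "wipe-and-reload"
    return "in-place upgrade"       # the bulk of the fleet

fleet = [
    {"name": "pc-001", "is_new": True},
    {"name": "pc-002", "has_issues": True},
    {"name": "pc-003"},
    {"name": "pc-004"},
]
paths = {d["name"]: migration_path(d) for d in fleet}
```

As in our own migration, most devices land on the in-place upgrade path, which is why keeping clients leveled up with background updates matters most there.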
As the leader of Dell EMC’s Server & Infrastructure Systems CTO team, I’m constantly drawn to the future. While many of our 2018 Server Trends and Observations came to fruition, and some are still playing out, our technical leadership team has collaborated to bring you the top 10 trends and observations that will most affect server technologies and adoption in 2019.

As the global leader in server technology, Dell EMC has attracted some of the brightest minds in the industry. Sharing a small glimpse into our brain trust – with deep roots in listening to our customers and leaders around the industry – each of these ten trends is authored by one of our Senior Fellows, Fellows or Senior Distinguished Engineers.

#1 – IT must be the enabler of the Transformational Journey

Robert W Hormuth – CTO & VP, Server Infrastructure Solutions

From a broader technology point of view, we are clearly in a data-driven digital ecosystem era, and you can read more about a wider set of 2019 technology industry predictions from Jeff Clarke, vice chairman of Products and Operations here at Dell Technologies. Businesses must embark on a challenging journey to enable multiple transformations: digital, IT, workforce and security.

When it comes to servers, we see them as the bedrock of the modern datacenter. Transformations are bringing incredible value to businesses and organizations of all types, making them more nimble, intelligent, competitive and adaptive. We are in the midst of a 50-year perfect storm on both the technology and business fronts. Businesses must transform and embrace the digital world, or get run over by a new, more agile competitor with a new business model benefiting from advanced technologies like data analytics, AI, ML and DL. No business is safe from the wave of digital disruption.

Options for mining data are opening new opportunities that are making businesses smarter by bringing customers and businesses closer together.
Companies must move fast, pick the right tool for the job, and focus on being the disruptor to avoid becoming the disrupted. Leading is easier from the front.

#2 – The Edge is Real

Ty Schmitt – Fellow & VP, Extreme Scale Infrastructure
Mark Bailey – Senior Distinguished Engineer, Extreme Scale Infrastructure
Alan Brumley – Senior Distinguished Engineer, Server Infrastructure Solutions, OEM Engineering

The expectations of IT hardware, software and datacenter infrastructure will continue to evolve in 2019. Large volumes of data ingest will require near- or real-time processing and will proliferate the concept and use cases of edge computing.

The definition of edge computing remains fluid. One perspective defines “the edge” as a vehicle for data aggregation and migration to the cloud. In this case, data streams ebb and flow upwards from the location of creation/collection and finally reside in the cloud. IT hardware use cases are emerging to support this vantage point, requiring smaller form factors and more ruggedized solutions. Non-traditional deployment and maintenance environments will force a new balance between the critical hardware design considerations of power, thermals and connectivity.

An alternative perspective defines “the edge” as a means by which traditional cloud architectures of compute and storage are deployed closer to the consumers and creators of data, alleviating the burden on the cloud and its associated transport mechanisms. The resulting geo-dispersal of compute and storage allows new usage models that previously were not possible. Data can be analyzed locally in real time, with only the resultant data being sent to the cloud.

Each perspective on “the edge” reflects a particular usage model, and users will ultimately define what challenge or new capability the edge represents to them.
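That second, analyze-locally model is easy to picture in a few lines of code. The sketch below is purely illustrative — the sensor readings, alarm threshold and summary format are invented for the example — but it shows how raw data stays at the edge while only compact “resultant data” travels upstream:

```python
# Illustrative edge pre-processing sketch: analyze readings locally,
# forward only a compact summary (the "resultant data") to the cloud.
from statistics import mean

def summarize_window(readings, alarm_threshold):
    """Reduce a window of raw sensor readings to one small summary record."""
    return {
        "count": len(readings),
        "mean": mean(readings),
        "max": max(readings),
        "alarms": sum(1 for r in readings if r > alarm_threshold),
    }

def edge_loop(windows, alarm_threshold, send_to_cloud):
    """Process raw data at the edge; only summaries cross the WAN."""
    for window in windows:
        send_to_cloud(summarize_window(window, alarm_threshold))

# Three windows of raw readings stay local; three small summary
# records are all that actually travels upstream.
uplink = []
edge_loop([[1.0, 2.0, 9.5], [1.1, 1.2, 1.3], [8.8, 9.9, 1.0]],
          alarm_threshold=8.0, send_to_cloud=uplink.append)
```

The design point is the ratio: the upstream infrastructure sees a handful of summary records instead of every raw reading, which is exactly the burden-reduction argument made above.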
2019 will usher in edge proofs of concept (POCs) as customers, edge hosting companies, real-estate owners, equipment manufacturers and IT innovators test business models and actively refine the new capabilities the edge affords them. Among them will be traditional colocation providers, new startups and large global infrastructure companies, all seeking insight into which edge solutions the industry will ultimately converge upon. New IT hardware, software and datacenter infrastructure form factors will be designed and trialed, allowing customers to test their solutions with small upfront capital expense. Small, self-contained micro datacenters will be deployed to enable traditional IT to be placed and operated close to the points where data is ingested or supplied.

Edge deployments will ultimately result in multi-tenant environments as initial private edge installations shift to allow public workloads to cohabit the same environment. This will have a positive impact, as multiple companies will require an edge presence across a given geographic region but their business models will not support the cost and complexity of a private installation. These hybrid edge deployments will allow heterogeneous solutions to work together to deliver better performance and satisfaction, while minimizing the burden on the upstream infrastructure.

The edge evolution will provide vast potential for how customers and providers use, analyze and distribute data. The creation of POCs in 2019 will allow all parties to vet and test new technologies and the associated cost models.
These findings will set the foundation for edge infrastructure and solutions going forward.

#3 – The Journey to Kinetic Infrastructure continues

Bill Dawkins – Fellow & VP, Server Infrastructure Solutions office of the CTO

The terms “composable infrastructure” and “server disaggregation” entered the mindsets of many enterprise IT departments in 2018, as the industry made initial strides in developing the technologies that will make a fully composable infrastructure possible. Dell EMC took a major step on our kinetic infrastructure journey with the availability of our new modular platform, the PowerEdge MX. The MX platform allows the composition of compute and storage resources through its built-in SAS infrastructure, and it is designed to incorporate new interconnect technologies as they become available. These are technologies that will enable full composability of all resources, including resources with sub-microsecond latency requirements such as Storage Class Memory (SCM). The MX’s unique “no mid-plane” design allows us to contemplate the incorporation of high-speed interconnect technologies like Gen-Z (see our August 2018 blog for more). It is the natural platform for exploring the expansion of composability beyond storage in 2019.

Another key element of the kinetic infrastructure journey is the continued development of Gen-Z. To realize the vision of a fully composable infrastructure, a rack-scale, high-speed, true fabric is required to allow the composition of all components, including SCM, NVMe flash devices, GPUs and FPGAs. Gen-Z is the industry’s choice for this fabric. With over sixty Gen-Z Consortium members, 2019 will see more technology development and demonstrations.

While Gen-Z is critical for realizing a system where all components are composable, 2019 will also see the rise of technologies that allow the composition of certain classes of components.
For example, NVMe over Fabrics will enable pools of NVMe SSDs to be dynamically assigned to servers within a data center while maintaining latencies low enough to retain the performance benefit of these devices. 2019 will be a year of acceleration on the kinetic infrastructure journey.

#4 – Data Science business disruption drives the need for AI/ML/DL

Jimmy Pike – Senior Fellow & SVP, Server Infrastructure Solutions office of the CTO
Onur Celebioglu – Senior Distinguished Engineer, Server Infrastructure Solutions, HPC
Bhyrav Mutnury – Senior Distinguished Engineer, Server Infrastructure Solutions, Advanced Engineering

In 2019, the boundaries of information technology will be stretched to their limit as the “data creation” IT economy transitions to one of “data consumption.” As the volume and variety of data an organization needs to analyze grows, there will be an increasing need for data-enriched techniques such as artificial intelligence, machine learning and deep learning (AI/ML/DL) to help transform this data into information.

The industry is in the midst of an undeniable deluge of data which, while it has traditionally originated within IT (i.e. traditional enterprise applications and data centers), will increasingly come from an ever-growing number of extraordinarily diverse sources. Thus, in 2019, the growth in demand for people with expertise in applying these techniques to solve business problems; for advances and standardization in AI/ML/DL tools, methodologies and algorithms; and for the compute, storage and network infrastructure to run these workloads will be nothing short of amazing.

We have already seen the adoption of accelerators such as GPUs and FPGAs to handle this increasing demand for compute, and, this year, we will see more specialized software solutions as well as “purpose-built” ASICs that accelerate AI workloads.
While providing more choice, this will make it harder for companies to pick which technology to invest in for sustained success.

In general, the undeniable influence of HPC (high-performance computing) will continue to reach into the mainstream, stretching the performance limits of traditional batch-oriented scientific computing as well as enterprise solutions. The transition to a data-consumption IT economy will create a greater focus on HTPC (high-throughput computing). As noted, the limits of traditional deterministic computing will mandate a blend of deterministic and probabilistic computing techniques (such as machine and deep learning). This addition to the IT tool chest will help identify the circumstances where “close” is good enough, so that those problems can be handled probabilistically (i.e. with ML) while traditional deterministic computing techniques are focused on the areas where the return on their use is maximized.

In 2019, the continued growth of data (especially at the edge) will see the rise of ML-I (machine learning inferencing) as the first layer of data ‘pre-screening’ at its source. While the press coverage associated with terms like hybrid cloud, AI, ML and edge computing will continue, the concepts by themselves will become less important as real solution providers seek to do the right thing at the right place, regardless of what it is called.

We believe 2019 will be the year of the ASIC for both training and inferencing. A host of new solutions will burst onto the scene and, as quickly as they come, many will disappear. Many have realized that the market opportunity for inferencing is vastly larger than that of the equally important model-training activity enjoyed by GPGPU providers.
Several companies intend to take market share, including Graphcore with its accelerator for both training and inferencing, AMD with its CPUs and ATI GPUs, and Intel with its CPUs and Nervana ML coprocessor. Fortunately, virtually all data science work takes place on top of a handful of popular frameworks like TensorFlow or PyTorch, which allows providers to focus on supplying the best underlying resources to those frameworks. Perhaps most importantly, we are already starting to see the beginnings of model transport and standardization, where fully trained models can be created in one environment and executed in a completely different one. In 2019, more advances are expected in model standardization.

The next big challenge will be model validation and the removal of hidden bias in training sets, and ultimately trust itself: how trust is described, measured and verified, and, finally, how the industry will deal with indemnification. We have already seen the huge impact that ML has had on voice and image recognition and as a variety of “recommenders.” For most of 2019, we can expect these applications to continue in a “human-assisted decision support” role, where the consequences of incorrect conclusions are limited.

#5 – The move from the Data Creation Era to the Data Consumption Era is leading to a silicon renaissance

Stuart Berke – Fellow & VP, Server Infrastructure Solutions, Advanced Engineering
Joe Vivio – Fellow & VP, Server Infrastructure Solutions, Extreme Scale Infrastructure
Gary Kotzur – Senior Distinguished Engineer, Server Infrastructure Solutions, Advanced Engineering

While general-purpose CPU advances continue to provide steady annual performance gains, the Data Consumption Era brings unprecedented computational and real-time demands that can only be met with innovative processing and system architectures.
Currently, application- and domain-specific accelerators and offload engines are being deployed within CPUs, on add-in cards and IO and storage controllers, and in dedicated hardware nodes to deliver the necessary performance while optimizing overall cost and power.

Within traditional CPUs and systems-on-chip (SoCs), instruction set architectures (ISAs) are being extended to include optimized vector and matrix integer and floating-point processing, pattern searching and other functions. The latest 10nm-and-below chip processes provide ample transistors to allow the inclusion of numerous dedicated silicon offload engines that deliver orders-of-magnitude performance improvements for functions such as encryption, compression, security and pattern matching, among many others. And advances in multi-chip packaging and die stacking allow the integration of multiple processors, memories such as High Bandwidth Memory (HBM) and other functions to efficiently process many operations entirely without going “off-chip.”

IO and storage controllers are similarly incorporating a broad set of dedicated silicon engines and embedded or local memories to dramatically reduce the load on the CPU. Smart NICs are evolving to include multiple microcontrollers, integrated FPGAs and deep-packet-inspection processors. And general-purpose GPUs are scaling up to tightly interconnect eight or more modules, each with terabytes per second of memory bandwidth and teraflops of processing power, to address emerging edge, AI, machine learning and other workloads that cannot be met with traditional CPUs.

These innovative architectures are incorporating emerging Storage Class Memory (SCM) within the memory and storage hierarchies to handle orders-of-magnitude greater data capacities at significantly lower and more deterministic latencies. Examples of SCM expected in the next few years include 3D XPoint, Phase Change Memory (PCM), Magnetic RAM (MRAM) and carbon nanotube RAM (NRAM).
Processor-local SCM will support terabytes of operational data, with persistence in the event of power loss, and storage systems will capitalize on SCM as primary storage or as optimized caching or tiering layers.

Finally, as traditional captive fabrication advantages fade and open manufacturing suppliers such as TSMC provide leading silicon process technology to all, innovation is accelerating across a wide variety of established and startup companies. A true silicon renaissance is underway to ensure that the computing demands of today and tomorrow continue to be met at suitable cost, power and physical packaging.

#6 – Data: It’s mine and I want it back. On-prem repatriation is happening

Stephen Rousset – Senior Distinguished Engineer, Server Infrastructure Solutions office of the CTO

As the cloud model matures, companies are recognizing the challenges of a single public cloud instance and starting to repatriate data and workloads back on-premises. While the rise of the public cloud brought real benefits to companies, it also brought challenges around loss of operational control, performance issues, security compliance and cloud cost sprawl. With the growth of the enterprise and mobile edge, a hybrid cloud model has quickly emerged as a much more appropriate solution for the majority of businesses. This data/workload placement transition, known as cloud repatriation, shows up in studies such as one from IDC (reported in “Businesses Moving From Public Cloud Due To Security, Says IDC Survey” and “Michael Dell: It’s Prime Time For Public Cloud Repatriation”), which finds that 80% of companies are expected to move 50% of their workloads from the public cloud to private or on-prem locations over the next two years.

One key driver behind these reasons for cloud repatriation is the velocity and volume of data generation, and with it the cost, control and containment of data.
With the astronomical growth of data generated over the last two years, even though public cloud companies have lowered their storage prices over that period, the real cost of retrieving and accessing a company’s data continues to increase, because the CAGR of data generation outpaces the price reductions. This leads to what may be considered a philosophical discussion but, with data all but locked in the public cloud by the cost of export, there is an overriding question of who actually “owns” the data, with some companies feeling that they are renting their own data rather than having clear ownership of it. This dynamic creates a data gravity in the public cloud that is costing companies a tremendous amount in unexpected fees and is accelerating the decision to take back control of their data and of the workloads that use that data. Dell EMC works with these customers to provide a breadth of infrastructure solutions that give them an optimized offering of data placement, protection and analytics to maintain true data ownership.

#7 – Blockchain can benefit the Enterprise

Gaurav Chawla – Fellow & VP, Server Infrastructure Solutions office of the CTO

Enterprises are always seeking ways to make their systems more secure and transparent, and blockchain could provide the underlying technology for such solutions. The origin of blockchain dates back to October 2008, when the first white paper was published for a “peer-to-peer electronic cash system,” giving birth to the Bitcoin digital currency.

The decade leading up to 2018 saw a lot of hype and activity in the area of crypto-currencies and ICOs (Initial Coin Offerings). As with other early-stage, high-impact technologies (e.g.
AI/ML/DL, Edge Computing/IoT, 5G), we have seen both perspectives: some technology enthusiasts see blockchain as the holy grail of decentralized identities, decentralized trusted compute and a next-generation Internet 2.0, while others are skeptical, viewing blockchain as just a distributed database.

In 2019, we will see the focus pivot to increased activity in the area of permissioned blockchains and their ability to address enterprise workflows and use cases. In essence, this is about applying distributed ledger technology (DLT), the technology underlying blockchain, to enterprise workflows. We will see it move into real PoCs to deliver on the promise of DLT. Some of the initial use cases may focus on improved implementations of audit/compliance in enterprise workflows, or on enabling secure sharing of information across multiple parties (competitors, vendors/suppliers and customers). These implementations will drive increased industry collaboration on blockchain-based architectures and give rise to consortiums focused on specific industry verticals: finance, logistics/supply chain, healthcare and retail, to name a few. These projects will drive DLT integration in brownfield deployments and will use a combination of off-chain and on-chain processing. Most of the data will be stored off-chain on existing compute/storage, and information will be brought on-chain where blockchain’s properties of immutability, security/encryption and distributed database provide benefits. Smart contracts will play a key role, and multi-blockchain architectures will start to evolve. We will also see increased momentum for DLT integrations with emerging use cases in IoT and AI/ML/DL. To be successful, implementations will need to pay close attention to the real benefits of blockchain and to integration aspects.

At Dell Technologies, we support both VMware Blockchain, based on the open source Project Concord, and other open source blockchain implementations.
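The immutability property that makes on-chain records attractive for audit and compliance can be sketched in a few lines: each record carries the hash of its predecessor, so tampering with any earlier entry invalidates every hash after it. This is a minimal illustration of the ledger concept, not any particular DLT product's implementation.

```python
import hashlib
import json

def add_block(chain, payload):
    """Append a record whose hash covers the payload and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    digest = hashlib.sha256(
        json.dumps({"payload": payload, "prev_hash": prev_hash},
                   sort_keys=True).encode()
    ).hexdigest()
    chain.append({"payload": payload, "prev_hash": prev_hash, "hash": digest})

def verify(chain):
    """Recompute every hash; an edited block breaks the chain from then on."""
    prev_hash = "0" * 64
    for block in chain:
        expected = hashlib.sha256(
            json.dumps({"payload": block["payload"], "prev_hash": prev_hash},
                       sort_keys=True).encode()
        ).hexdigest()
        if block["hash"] != expected or block["prev_hash"] != prev_hash:
            return False
        prev_hash = block["hash"]
    return True

ledger = []
add_block(ledger, {"shipment": "A-17", "status": "received"})
add_block(ledger, {"shipment": "A-17", "status": "audited"})
print(verify(ledger))                          # True: chain intact
ledger[0]["payload"]["status"] = "lost"        # simulate tampering
print(verify(ledger))                          # False: tampering detected
```

A real permissioned ledger adds consensus, signatures and smart contracts on top of this hash chain, but the tamper-evidence shown here is the core property enterprises are buying.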
We look forward to taking these blockchain projects to the next level of implementations and consortium engagements.

#8 – Security threats are the new exponential

Mukund Khatri – Fellow & VP, Server Infrastructure Solutions, Advanced Engineering

It can be hard to fathom what “worse” could mean after the barrage of high-impact vulnerabilities and breaches we experienced last year. 2019 will see yet another year of exponential growth in security threats and events, driven by a combination of broadened bug bounty programs, increasing design complexity and well-funded, sophisticated attackers.

Staying current with timely patch management will be more critical than ever for enterprises. There will be broader recognition of the critical need for cyber resiliency in server architectures, as currently available in Dell PowerEdge, providing system-wide protection, integrity verification and automated remediation. While an impregnable design is a myth, effective roots of trust and trustworthy boot flows will be needed across the compute, management and fabric domains of modern infrastructure. Monitoring and remediation technologies must also evolve, using AI and ML to enhance the security of the systems they watch.

In 2019, supply chain concerns will be top of mind for all IT purchases. As seen recently, a breach in the supply chain, whether of hardware or software, can be extremely difficult to detect, and the implications can be catastrophic for all involved. One of the key objectives this year will be rendering a successful intrusion harmless: in other words, if someone does get into the platform, making sure they cannot obtain meaningful information or do damage.
This will drive innovations delivering a more robust trust strategy based on enhanced identity management. Identity at all levels (user, device and platform) will be a major focus, requiring a complete end-to-end trust chain for any agent that can install executables on the platform, along with policy tools for ensuring trust. This will likely include options based on blockchain. Greater focus on encryption will emerge, requiring any data at rest to be encrypted, whether at the edge or in the datacenter, along with robust encryption key management. Secure enclaves for better protection of secrets are another emerging solution space that will see more focus. Regulations to protect customer data, similar to the EU’s General Data Protection Regulation (GDPR), California’s Consumer Privacy Act (CCPA) and Australia’s encryption law, can also be expected to increase, driving compliance costs and forcing tradeoffs. And, finally, newer technologies like Storage Class Memory (SCM), Field Programmable Gate Arrays (FPGAs) and Smart NICs, while all critical for digital transformation, will bring their own set of unique security challenges. For 2019 and the foreseeable future, the exponential trajectory of security threats is here to stay.

#9 – Is OpenSource “the gift that keeps giving?”

Stephen Rousset – Senior Distinguished Engineer, Server Infrastructure Solutions office of CTO
Shawn Dube – Senior Distinguished Engineer, Server Infrastructure Solutions, Advanced Engineering

The adoption and proliferation of open source software (OSS) has created communities of creativity and provided knowledge leverage across many disparate fields, yielding a vast selection of offerings in the IT ecosystem.
This continued broadening of open source choices, together with companies’ unyielding desire to reduce expenses, has accentuated the appeal of “free” CapEx from open source at the C-suite level. But companies are realizing that the “free” of open source is not free as in beer, but free as in a free puppy. A free beer is quite enjoyable on a hot Texas summer day; a free puppy can bring a different kind of enjoyment, but it requires significantly more attention, care and ongoing expense to keep it healthy and out of trouble. Not a lot of planning needs to go into consuming a free beer, but taking on a free puppy requires real planning around time and money.

Dell EMC has always supported OSS and remains very bullish on the open source community, but Ready Solutions that are built, delivered and working are resonating more than a DIY model. While open source can initially look very appealing, an open source DIY model requires retaining the right (often hard to find) skill sets in your company, diligence in selecting the right parts and pieces to integrate, and, of course, continued maintenance of all those integrated pieces. We have seen numerous customers have to reset their strict DIY model and look for alternative ways to achieve their high-level business objectives. Dell EMC recognizes the desire for customer choice and has put together a portfolio of options, ranging from fully supported Ready Solutions of open source packages that address customer workloads to highly optimized engineered solutions leveraging open source or partner packages.

#10 – Telemetry will bring new levels of intelligence to IT

Elie Jreij – Fellow & VP, Server Infrastructure Solutions, Advanced Engineering
Jon Hass – Senior Distinguished Engineer, Server Infrastructure Solutions office of CTO

Optimizing IT operations and making them more efficient is a goal every enterprise shares.
One means of accomplishing this goal is to gather more telemetry data on hardware and software infrastructures for use by management applications and analytics. As the need for acquiring telemetry data increases, collection methods need to be improved and standardized. This has been recognized by Dell EMC and the DMTF standards organization, which recently released the Redfish Telemetry schema and Streaming & Eventing specifications. These specifications simplify the data collection task, make it more consistent across infrastructure components, and enable data analytics applications to focus on the data content without having to deal with multiple collection methods and formats.

IT infrastructure components use a variety of interface mechanisms and protocols that vary widely between devices (e.g. Modbus, I2C, PWM, PECI, APML). A local management controller or instrumentation can collect telemetry data using device- and component-specific protocols and then stream the data in standardized formats to remote telemetry clients and analytics applications. Examples of local manageability controllers include IoT gateways, Service Processors or Baseboard Management Controllers in IT equipment, and other controllers inside or outside the data center. Factors to consider when planning telemetry utilization include bandwidth, security, consistency, latency and accuracy.

While there has been a lot of focus on application-specific telemetry, such as face recognition or customer shopping patterns, expect a new focus on IT infrastructure telemetry. This will allow smarter management of the compute, storage, networking and related software infrastructures. Streaming consistent, standardized telemetry information about the infrastructure will enable analytics applications to optimize manageability, deliver automation such as predictive failure and network intrusion detection, and run the infrastructure more effectively.
These features become more important as IT infrastructure characteristics evolve, and aspects like energy efficiency, edge deployment and reduced total cost of ownership continue to be prioritized.
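The pattern described in #10, a local controller translating device-specific readings into one standardized stream, can be sketched as below. The payload shape is loosely modeled on the Redfish Telemetry metric-report structure; the property names and sensor values here are illustrative, not a verbatim reproduction of the DMTF schema.

```python
import json
import time

def build_metric_report(report_id, readings):
    """Package raw sensor readings into one standardized, self-describing
    report, so analytics consumers never touch device-specific protocols."""
    return {
        "Id": report_id,
        "Timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "MetricValues": [
            {"MetricId": metric, "MetricValue": str(value)}
            for metric, value in readings.items()
        ],
    }

# Readings a BMC might have gathered over I2C/PECI etc. (values invented).
raw = {"CPU1Temp": 54.0, "FanSpeed1": 8200, "PowerConsumedWatts": 312}
report = build_metric_report("PowerThermalReport", raw)
print(json.dumps(report, indent=2))
```

Whatever bus each sensor actually lives on, every consumer downstream sees the same JSON shape, which is the consistency benefit the specifications aim for.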
TAIPEI, Taiwan (AP) — Chinese police have arrested more than 80 suspected members of a criminal group that was manufacturing and selling fake COVID-19 vaccines, including to other countries. State media say police in Beijing and in Jiangsu and Shandong provinces broke up the group led by a suspect surnamed Kong that was producing the fake vaccines, which consisted of a simple saline solution. The vaccines were sold in China and to other countries, although it was unclear which ones. State media say the group had been active since last September.
Vermont Wind LLC, which is developing a 40-megawatt, 16-turbine project near I-91 on Granby Mountain in Sheffield, has received its final permits from the Vermont Environmental Court. The Vermont Supreme Court ruled in favor of the project in February 2009, upholding the Vermont Public Service Board’s issuance of a certificate of public good for the project. Some abutters and even Governor Douglas have opposed the project. The company will pay Sheffield $520,000 a year.

First Wind, based in Boston, on behalf of its subsidiary Vermont Wind, has issued the following statement regarding the August 30, 2010, ruling by the Vermont Environmental Court on permits related to the company’s proposed Sheffield Wind Project:

“We are pleased with today’s ruling by the Court. First Wind plans to construct the project in accordance with the Vermont Public Service Board’s conditions as detailed in the Certificate of Public Good granted for the project.

“First Wind is working toward meeting Vermont’s immediate energy needs with renewable and affordable electricity. In addition, the company is committed to bringing economic opportunities to the Northeast Kingdom via the development, construction and operation of this project, through tax revenues, jobs and delivering clean, renewable energy made in Vermont for Vermont residents.”

Source: First Wind. 8.30.2010

About First Wind
First Wind is an independent wind energy company exclusively focused on the development, financing, construction, ownership and operation of utility-scale wind projects in the United States. Based in Boston, First Wind has wind projects in the Northeast, the West and in Hawaii, with the capacity to generate up to 504 megawatts of power. For more information on First Wind, visit www.firstwind.com
IBM (NYSE: IBM) today announced innovative new chip-making technology for power-management semiconductors, the company’s first foray into a segment seen as critical to the development of alternative energy sources, smart buildings and new consumer devices. The new chips for this “Smart Planet” infrastructure will be manufactured at IBM’s plant in Essex Junction starting in the first half of 2011.

An IBM spokesman said there will be no specific change in IBM employment in Vermont from this announcement. However, he said the plant has been adding manufacturing employees to handle strong customer demand.

IBM’s process integrates wireless communications into a single power-management chip, a first that can cut production costs by about 20 percent, allowing chip designers and manufacturers to create a new class of semiconductors: ultra-small, affordable chips that control power usage while communicating in real time with systems used to monitor “smart” buildings, energy grids and transportation systems.

The main function of power-management chips is to optimize power usage and serve as bridges so electricity can flow uninterrupted among systems and electronics that require varying levels of current. They are key components in solar panels, for example, and are widely used across industrial segments: automobiles, consumer electronics (digital televisions) and mobile communications (mobile phones).

By using the same chip-making process employed in computers and smart phones, CMOS-7HV can lower the cost of producing these chips while allowing the integration of an unprecedented number of new functions, resulting in one chip where previously three or four were needed.
Such advancements are critical to the rollout of smart systems, where the ubiquity of cheap, single-chip sensors depends on affordable manufacturing technology.

The market for power-management (PM) semiconductors is about $31 billion in 2010, up a sizable 40 percent from 2009 and on track to double by 2014, according to iSuppli.(1) Fresh demand from alternative energy and consumer electronics manufacturers is driving much of the growth, although any device with a power supply, battery or power cord uses a power-management chip. Devices with power supplies are especially heavy users of electricity, drawing more than half of the global $30 billion power market through their wiring annually, with 60% of it wasted or leaked, costing consumers $10 billion a year.(2) New semiconductor manufacturing technologies such as CMOS-7HV can help electronics makers sew up the power leakages, enabling the use of smaller, more powerful batteries.

WiSpry, of Irvine, Calif., is a leading provider of wireless chips found in devices such as smart phones. “IBM’s process pushes us closer to the holy grail of wireless: connect anywhere, at any time,” said Jeff Hilbert, president and co-founder. “By enabling more efficient power management in smart phones, IBM’s technology opens up the possibility of using smaller, lighter batteries or needing less recharge time to provide the same amount of ‘talk’ time, video sharing or picture-snapping.”

Breakthrough: integration that helps commercialize new energy

CMOS-7HV offers PM chip manufacturers the potential to speed the rollout of new classes of products and infrastructure.
For example:

Alternative Energy: IBM’s wireless PM technology can be used to create advanced power-optimizing chips located on individual solar panels to optimize the electrical output of an entire array, harvesting up to 57% of the power that is typically lost to real-world conditions such as dirty panels.(2)

Smart Buildings: Since buildings worldwide consume more than 40% of all the energy we use, more than any other product or asset, the drive is on to retrofit them with new energy-monitoring technology that makes wide use of PM chips. With IBM technology these PM chips can get smaller and cheaper and can do away with costly, intrusive wiring, making energy-efficient retrofits an easier task for the average building owner, who can see up to a 50% improvement in efficiency.(3) Semiconductor technology such as CMOS-7HV will be a key enabler of future “Zero Net” buildings: structures that operate pollution-free.

“This new process can be used to create new types of affordable wireless sensors, the kind needed to monitor and connect the smart systems coming on line in the next few years, from alternative-energy products being developed by industrial firms to consumer companies looking to deliver mobile entertainment,” said Michael J. Cadigan, general manager, IBM Microelectronics Division. “Integrating communications and power sensors on one chip cuts costs for the industry and is an example of our ‘smart-planet’ technology vision, one that we back up with R&D.”

IBM is rolling out the new chip-making process to manufacturers in the consumer electronics, industrial, automotive, digital media and alternative-energy segments. The company’s semiconductor plant in Burlington, Vt., will be the primary manufacturing location for the new technology.
IBM is already accepting designs from customers and is scheduling full production for the first half of 2011.

IBM CMOS-7HV joins a growing list of state-of-the-art chip-making processes and services that IBM Microelectronics’ foundry business offers to semiconductor and electronics manufacturers worldwide. IBM’s Specialty Foundry technologies focus on improving the functionality and wireless connectivity of a wide range of consumer devices, from smart phones to WiFi/WiMax-enabled notebooks, GPS devices, Zigbee devices and mobile TV. IBM Microelectronics Division offers a rich portfolio of custom solutions to accelerate clients’ time-to-market needs, including world-renowned silicon germanium technology, RFCMOS, power-management technology, high-performance SOI, embedded DRAM, industry-standard bulk CMOS and custom processors.

Technology Highlights of IBM CMOS-7HV
- 180nm lithography
- Triple-gate-oxide high-voltage CMOS technology, including high-voltage FETs from 20 to 50V, extendable to 120V
- Shallow-trench isolation
- 150K circuits/mm2
- RF features: precision poly, diffusion and well resistors; MIM capacitors and vertical natural capacitors for high-voltage use; varactors; HV Schottky barrier diode; inductors
- Three to seven levels of Al, including thick last metal
- One-time programmable (OTP) memory
- Wire-bond or solder-bump terminals

Source: IBM. ARMONK, N.Y. – 16 Sep 2010. Vermont Business Magazine.
US Senators Patrick Leahy (D-VT) and Lindsey Graham (R-SC) Monday announced they now have 61 Senate cosponsors of their latest “Guard Empowerment” effort, the National Guard Empowerment and State-National Defense Integration Act of 2011 (S.1025), which Leahy and Graham introduced in May. The legislation, known as Guard Empowerment II, builds on reforms made in 2008 by giving the Guard and Reserve a seat at the Pentagon’s budget and policymaking tables and updating jurisdictional and operational lines of authority. Among other changes, the bill would make the Chief of the National Guard Bureau a permanent member of the Joint Chiefs of Staff; reestablish the position of Vice Chief of the Guard Bureau at the three-star level; enhance the Guard’s representation at the senior levels of U.S. Northern Command; and help clarify the disaster response command relationship among the Guard and the U.S. military commands.

The legislation is endorsed by the American Legion, the Veterans of Foreign Wars, the National Governors Association, the National Guard Association of the United States, the Adjutants General Association of the United States, and the Enlisted Association of the National Guard of the United States.

Leahy said, “Today, at home and abroad, we are asking the Guard to take on more responsibilities than ever. The Guard has grown to become a front-line, 21st Century force, but it is trapped in a 20th Century Pentagon bureaucracy. We need to clear away those cobwebs and give the Guard a voice in the Pentagon that is commensurate with the scale of its missions here and overseas.”

Graham said, “Guardsmen and Reservists are citizen-soldiers. During the War on Terror, they have been called up to duty, taken away from their work and families, and sent to far-away lands for long tours in protection of our nation.
We need to ensure the Guard and Reserves have a seat at the table when the important decisions affecting our national security are made.”

Leahy and Graham co-chair the Senate National Guard Caucus, the Senate’s largest caucus and one of its most active on matters of defense and national security. Having reached the critical 60-vote threshold, both senators expect that Empowerment II will receive consideration as an amendment to the Fiscal Year 2012 National Defense Authorization Act.

In addition to Leahy and Graham, the senators who have cosponsored the bill so far are Senators Daniel Akaka (D-Hawaii), Lamar Alexander (R-Tenn.), Kelly Ayotte (R-N.H.), Max Baucus (D-Mont.), Mark Begich (D-Alaska), Michael Bennet (D-Colo.), Jeff Bingaman (D-N.M.), Richard Blumenthal (D-Conn.), Roy Blunt (R-Mo.), John Boozman (R-Ark.), Barbara Boxer (D-Calif.), Scott Brown (R-Mass.), Sherrod Brown (D-Ohio), Richard Burr (R-N.C.), Maria Cantwell (D-Wash.), Benjamin Cardin (D-Md.), Thomas Carper (D-Del.), Robert Casey (D-Pa.), Daniel Coats (R-Ind.), Kent Conrad (D-N.D.), Christopher Coons (D-Del.), Bob Corker (R-Tenn.), Richard Durbin (D-Ill.), Michael Enzi (R-Wyo.), Dianne Feinstein (D-Calif.), Al Franken (D-Minn.), Kirsten Gillibrand (D-N.Y.), Chuck Grassley (R-Iowa), Kay Hagan (D-N.C.), Tom Harkin (D-Iowa), Dean Heller (R-Nev.), John Hoeven (R-N.D.), Tim Johnson (D-S.D.), Amy Klobuchar (D-Minn.), Mary Landrieu (D-La.), Frank Lautenberg (D-N.J.), Mike Lee (R-Utah), Richard Lugar (R-Ind.), Joe Manchin (D-W.V.), Claire McCaskill (D-Mo.), Robert Menendez (D-N.J.), Jeff Merkley (D-Ore.), Jerry Moran (R-Kan.), Patty Murray (D-Wash.), Mark Pryor (D-Ark.), James Risch (R-Idaho), Pat Roberts (R-Kan.), John D.
Rockefeller IV (D-W.V.), Bernard Sanders (I-Vt.), Charles Schumer (D-N.Y.)*, Jeanne Shaheen (D-N.H.), Olympia Snowe (R-Maine), Debbie Stabenow (D-Mich.), Jon Tester (D-Mont.), Mark Udall (D-Colo.)*, David Vitter (R-La.), Mark Warner (D-Va.), Sheldon Whitehouse (D-R.I.), and Ron Wyden (D-Ore.).

Source: Leahy’s office. *These senators have been added as cosponsors of S.1025 as of October 3, 2011, but will not appear in the Congressional Record until October 4, 2011.
I made $5.00 an hour when I was 9. Every Friday afternoon my mom would take me in to work with her to clean my uncle’s office building. I vividly remember that “cleaning” meant scuffing the walls with the vacuum and collecting a Santa-like bag of trash from all the offices. If I did a satisfactory job and it was the right Friday (aka payday), my mom would hand me my arguably well-deserved check from my gracious uncle. After our strenuous workday, my mom and I would head over to the bank right down the road. It took a couple years of maturing and a few jobs later before I even noticed that the bank I went to wasn’t actually called a bank but rather a credit union. It performed the same services as a bank, so I thought nothing of it; must be a fancy bank or something.

More than a fancy bank

My mom clearly understood the importance of a credit union by keeping our money there, and I wish I could say the same. I wish I could say I fell into the credit union industry because I supported its mission and purpose. Too much time elapsed before I even started to comprehend the many reasons to be part of a credit union; instead, by happenstance, I landed a summer internship my junior year of college at CUNA Mutual Group. It wasn’t until that year (at 21 years old) that I finally started to understand why my mom made the decision to join a credit union such a long time ago. At CUNA Mutual Group, I was on the e-commerce team, where I helped with various ongoing website projects, including boosting SEO efforts and researching content strategy. The summer flew by, but I gained a totally new understanding of the cooperative model and the credit union mission from learning workshops and co-workers. I still didn’t fully understand the industry in its entirety, since it was only a summer and I was only exposed to one small facet of an organization that seeks to do so much to help.
Fraud exposures continue to plague our industry, many of them tied to the expanded use of remote transaction channels. And with COVID-19, these crimes have only gotten worse.

What’s the deal with HELOC fraud?

HELOC fraud poses a potentially expensive risk to credit unions, sometimes resulting in six-figure losses. Fraud criminals are aware that these open-ended loans can offer a lot of financial gain. They also know these loans are often not monitored as closely as an auto loan, mortgage loan or credit card loan, and are therefore easier targets for an undiscovered attack. HELOC attacks performed via remote channels have ramped up during COVID-19. Credit unions will want to take extra steps to review their controls surrounding HELOCs to prevent these dangerous attacks.