The Day VMware Ate Cisco

Ever since the introduction of its hypervisor, VMware has been steadily encroaching on the value that Cisco products provide in the data center. With the introduction and growth of the private cloud, the "coopetition" between these tech titans has inevitably intensified, and VMware's acquisition of Nicira may prove to be the final strategic nail in the coffin of Cisco's current data center strategy. A few years from now, we may simply look back on this as the day VMware ate Cisco.


With everything we have all seen and heard about cloud computing over the past few years, sometimes something happens that makes you realize just how real the dramatic changes facing the IT industry are. VMware's acquisition of Nicira was one of those moments this week. It should serve as a wake-up call to everyone in the IT industry to prepare for the changes coming to their IT departments. It is no secret that the "cloud" is increasingly popular and that private cloud architectures are rapidly becoming the dominant architectural style for in-house infrastructure. While VMware has clearly won the battle to virtualize enterprise workloads, the battle of cloud controllers, such as VMware's vCloud Director, is now raging. All of computing is undergoing a dramatic shift as the cloud evolves, and this has sparked "Betamax"-style wars being fought across the industry. The computer industry has gone through many epochal changes over its history, so we can expect that as cloud computing matures, consolidation will eventually occur and dominant architectural styles will emerge. Different vendors will ultimately converge, each with their own slant, around these common styles. And right now virtually all of the major vendors are making their plays for advantage in the cloud, striving to make their own architectures the standards of the new era. The key battle VMware faces now is not with Cisco, but with Microsoft, Citrix, OpenStack, CloudStack, Amazon and even Google.

At one point, enterprise private cloud infrastructure seemed like it may have evolved in a way that was more disconnected from cloud service providers; however, a lot of different technical and economic factors have helped to shape the current state of the cloud market. As the significance of computing in all aspects of business continues to grow, rapid application introduction has become increasingly critical at the business level, which is forcing infrastructure standards to morph at a rapid pace. And as XaaS and consumerization continue to explode, the ability of enterprise IT departments to offer infrastructure that is as streamlined, flexible, accessible and inexpensive as that of XaaS providers is critical ... at least for those that still want to have in-house infrastructure to manage. The battle for the private cloud now demands that VMware offer a holistic, self-contained solution that provides everything cloud application developers demand. And as enterprise applications that were built for the client-server era are re-engineered and purpose-built for cloud architectures, they are emerging with fundamentally different infrastructure demands. Back in the client-server era, applications were largely built to run on one big server; scale-out methods were primitive and proprietary. The explosion of the internet led to massive improvements in distributed computing.
As cloud-style application architectures have emerged, developers have taken advancements in distributed computing to a whole new level. The latest MapReduce applications truly treat an entire cloud of infrastructure resources as though the cloud were a single system ... and accordingly, many of the application-level interactions that used to happen inside a single computer now happen across the cloud fabric. As a result, cloud application developers are far more network-savvy than those who have been focusing on enterprise infrastructure alone may realize. For the average enterprise IT worker, the intricacies of advanced distributed computing have been largely hidden. For the past 10 years, while enterprises have been focused on virtualizing applications built for the client-server era, web providers have swelled the ranks of IT professionals who understand web-style programming and architecture. All of these developers have to learn networking for development, and they have to debug their applications across network fabrics. The old model, where the network guy is called in with a sniffer to help debug application problems, really applies mostly to legacy applications. In this new world, application developers are already using tcpdump and packet-level analysis tools to debug application streams across a network. Not only is the traditional network guy not needed, but often his skill set is still optimized for legacy applications that sent only more primitive communications across the network. Much as one of the prevailing themes of the past ten years has been the effort by enterprises to virtualize large percentages of their applications, the momentum has now shifted to a very analogous effort ... moving virtualized applications into the highly optimized and automated cloud application lifecycle.
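As a concrete illustration of the packet-level debugging described above, here is the kind of tcpdump session a cloud application developer might run on an application host. This is a hypothetical sketch: the interface name, addresses, port and file name are illustrative placeholders, not taken from any particular environment.

```shell
# Capture the traffic between an app tier and a database tier.
# Interface, addresses and port below are illustrative placeholders.
tcpdump -i eth0 -nn -s 0 'host 10.0.1.20 and tcp port 5432' -w app-db.pcap

# Read the capture back with inter-packet time deltas (-ttt) to spot
# slow request/response exchanges across the fabric.
tcpdump -r app-db.pcap -nn -ttt
```

The point is not the specific filter but the workflow: the developer captures and analyzes the application's own streams without waiting on a networking team.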
In the early days of virtualization, there were a lot of inhibitors limiting virtualization to a small percentage of enterprise applications. With time, hypervisor vendors added new features and applications evolved to the point we are at today, where most applications can be virtualized. So the momentum over the next few years will largely be around how many different applications we can stuff into one common private cloud. One of the biggest challenges with this effort is that many different applications, most notably the newest and emerging cloud applications, have distinctly different network and topology requirements. And where it gets really challenging is the need for elasticity ... the need for applications to grow and shrink on demand ... which for distributed applications means dynamic modification of network topologies. So how do you stuff a bunch of applications with disparate topology requirements into a single cloud with a single static topology? The same way that you put numerous applications and operating systems onto a single server: by inserting a virtualization layer that shields applications from the complexities of the physical infrastructure. Network topology itself is now becoming a network service, and true network virtualization will allow hypervisor environments to provide these virtualized network topologies and services. If you have worked in the networking industry and have any exposure to VMware, it is pretty obvious that the type of virtualization common in the networking industry (VRF/VDC) isn't even in the same league as the type of virtualization that VMware has provided for servers. Ultimately it comes down to this: the requirement from cloud developers is to be able to define network services and behavior dynamically through software ... something the traditional network just can't do.
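The idea of topology as a software-defined service can be sketched with Open vSwitch commands. This is a rough, hypothetical example (the bridge and port names are made up), not a statement of how VMware or Nicira implement it:

```shell
# Give a tenant its own isolated virtual switch. "Rewiring" the topology
# is just another CLI/API call, never a physical change.
ovs-vsctl add-br tenant-a-br0
ovs-vsctl add-port tenant-a-br0 vm1-eth0
ovs-vsctl add-port tenant-a-br0 vm2-eth0 tag=100   # VLAN-isolate this port

# Elastic shrink: detach a port when the application scales down.
ovs-vsctl del-port tenant-a-br0 vm2-eth0
```

Because every step is a command that software can issue, a cloud controller can create, grow and tear down per-application topologies on demand, which is exactly what a static physical topology cannot offer.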
The main job of the private cloud controller is to examine the needs of applications and their changing demands in real time, and to optimize these across a pool of server, storage and networking resources, with the goal of driving resource utilization to the highest possible levels without impeding application performance. For networking, this idea is a nightmare: it simply cannot work with the industry's antiquated approach to Quality of Service (QoS). And this is THE critical point driving SDN. For private clouds to achieve the key goals of their current growth trajectory, the cloud controller must tightly manage network access and each application's network requirements; this job simply cannot be part of a separately controlled third-party solution. And clearly the legacy approach to QoS cannot be extended to this level of demand. Over the past 15 years we have watched QoS evolve from a model built at a time when application architectures barely resembled what they are today. The networking industry has approached modernizing QoS on an application-by-application basis, and even with that slow one-app-at-a-time approach, new network-sensitive applications like VoIP and FCoE have taken years to implement. Each of these has also had the benefit of frequently being the only prioritized traffic on a given link, and in the case of VoIP, real contention for bandwidth was rare. And today, despite years of effort, multivendor/heterogeneous FCoE fabrics still seem like a pipe dream. It is astoundingly clear that this approach will not work for the emerging demands of the cloud. This is exactly why OpenFlow has been so appealing to cloud developers.
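To make the contrast with link-by-link QoS concrete, here is a hypothetical sketch of OpenFlow-style rules pushed into an Open vSwitch bridge with ovs-ofctl. In a real deployment a controller would install and rewrite these flows programmatically; the bridge name, port numbers and the 8080 service port are illustrative assumptions.

```shell
# Steer one application's traffic out a dedicated uplink while all other
# traffic takes the normal L2 path; the policy is data pushed into the
# switch, not per-device configuration.
ovs-ofctl add-flow br0 "priority=100,tcp,tp_dst=8080,actions=output:2"
ovs-ofctl add-flow br0 "priority=0,actions=normal"

# When the controller detects congestion or a server failure, it simply
# rewrites the rule to shift the stream onto another path.
ovs-ofctl mod-flows br0 "tcp,tp_dst=8080,actions=output:3"
```

This is the property cloud controllers need: forwarding behavior that can be changed per application, in real time, by software.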
While traditional networking devices still have no real awareness of network conditions in their forwarding decisions, even the earliest OpenFlow applications written by grad students showed how powerful the OpenFlow paradigm is in its ability to forward based not only on real-time network conditions, but also with real-time awareness of application and server availability. This behavior is exactly what cloud developers are looking for, hence their affection for SDN. Because the traditional network has been abysmal at providing meaningful application services, interfaces or programmability to web application developers, for years application developers have been building patchwork at the application layer to compensate for the inability to communicate with the network. If any Cisco fans have read to this point, this statement may upset them, but this was the exact theme of David Ward's talk at the first Open Networking Summit. This paved the way for Open vSwitch (OVS) in the cloud provider market, where it has been a massive success. OVS has become so popular with cloud providers that the OVS kernel module is now part of mainline Linux. Because Open vSwitch resides in the hypervisor and is open source, it lets application developers try new ways around the many limitations of the traditional, developer-unfriendly network. As a result, over the past few years OVS has enabled many of the world's largest networks to deliver elastic network capabilities by bypassing the limitations of traditional infrastructure.

What does this mean for the networking industry? I predict that hypervisor networking will become the domain of application developers and VMware administrators, dissolving the silos that cause slow operations, so that a single cross-functional team will handle private cloud management of every technology that composes a private cloud, including the intra-container fabric.

Why is this bad for Cisco? The battle for the cloud has already moved onto some of Cisco's strategic ground. As the cloud seeks holistic support for all performance-related characteristics of applications, the traditional access layer and its associated network services are being absorbed into the cloud management platform, which severely limits Cisco's ability to offer the value-added services that sustain its margin levels and strong brand loyalty. This need for holistic support means that the key features VMware will not consider are those that matter only to the infrastructure itself rather than to applications or workloads. This dramatically limits the value proposition of UCS and undermines key strategic cornerstones such as VN-Tag and the infamous Palo adapter. VMware's move to offer a self-contained private cloud is built on vendor-agnostic infrastructure; and not just any infrastructure, but specifically the new style of cloud-optimized hardware used by Cloud Foundry and other leading IaaS/PaaS providers, which is entirely different from the UCS architectural style. With strong precedents supporting hybrid and community clouds, the architectural styles used by public cloud providers have a major influence on how enterprises will ultimately deploy their private clouds. While enterprises have unique needs and will not deploy identical infrastructure, the style that comes to dominate will be an enterprise-adapted version of cloud provider infrastructure, not something fundamentally different like UCS. I am not simply attacking UCS here; it has some great features, but ultimately the industry will consolidate around common architectural styles, and UCS is increasingly becoming a niche architecture.

As hypervisor networking grows and VMware administrators become confident in their ability to manage their own virtual networks, physical networking solutions will emerge that are built with plug-and-play compatibility to support and strengthen hypervisor networks. This will shift the administrative domain that controls the cloud fabric to virtualization administrators and application developers and architects. And it is fair to assume that VMware, Microsoft and Citrix will eventually certify different vendors' networking hardware, further challenging Cisco's dominance. While having to sell to a very different audience in customer environments and supporting entirely new features in a new and different marketplace are challenges, the biggest challenge for Cisco will be its competition with VMware. Cisco has a tendency to constrain its features to push customers toward purchasing more of its products. So as private clouds continue to encroach on Cisco's strategic ground and limit the value propositions of Cisco's data center ambitions, I find it unlikely that Cisco will take this lying down. My bet is that Cisco will move rapidly to develop advanced features limited to its N1k and UCS customers. I anticipate hearing about how VMware and other private cloud deployments will work much better for those that buy the N1k and UCS, pushing those that want to stick to VMware's roadmap elsewhere. Cisco has already kept crucial features out of its physical networking portfolio to help push its other platforms, and unless it drops its competing lines, this type of behavior is expected and natural. Frankly, there is nothing wrong with it, but it will open the door for Cisco's competitors to strengthen VMware's native toolset without holding premium features back for UCS and N1k customers. So I am not simply trying to attack Cisco and spare its competitors; it just seems clear that Cisco is in the more vulnerable position here. And if Cisco loses key ground in the data center, it will become more susceptible to attacks from its competitors across the board.
I really don't see Cisco keeping the same level of brand loyalty if other switch vendors gain the opportunity to shine in the data center; it will demonstrate clearly that Cisco isn't the only company that can make a switch. While the pace of change across all of technology has been maddening, this acquisition really signifies the cementing of the way that much of the architecture of the cloud era will evolve, and the vision of the future of networking is now increasingly clear. The private cloud has unique needs, and the networking components of each cloud container will become the domain of the private cloud management platform, separate from the rest of the network, and will emerge as a new and distinctly different networking marketplace and ecosystem in which an entirely different group of players will control the industry. This move adds substantively to the SDN movement and is among the most powerful evidence to date that SDN will be the way of the future.

Another key requirement is optimizing the efficiency levels of the infrastructure. For several years, as virtualization practices have matured, enterprise VMware administrators have worked to find the optimal mix of applications to maximize average resource utilization on their servers. To date these efforts have focused largely on maximizing CPU, memory and storage utilization, while the network has largely gotten a pass, as Cisco has tried to raise barriers to keep VMware administrators from penetrating its domain of control. However, the latest generation of server CPUs has brought a renewed focus on I/O efficiency, and the current trend is to examine network utilization the same way we examine CPU/RAM/storage utilization. This is not a simple proposition, and it is made even more challenging by private cloud platforms seeking to accomplish it in an automated fashion.

Historically, hypervisor networking has not been taken very seriously in the enterprise. When VMware first emerged, rapid provisioning and process optimization carried tremendous business value, and few decried the collapse of the traditional network access layer. Cisco has always fought to control the access layer and its important value-added network services, so as customers adopted VMware and its initially simplistic vSwitch, Cisco worked to move them onto VN-Tag or the Nexus 1000V (N1k) to retain control of access-layer services. VMware, meanwhile, has been slowly adding capabilities that deliver advanced networking features competitive with the N1k. In my experience, however, the enterprise market at large showed limited interest in advanced hypervisor networking until recent advances demonstrated that this space is a real threat. The hypervisor networking space got a huge boost with the release of the VXLAN protocol, a tunneling technology that allows VMware to bypass many of the constraints of the physical network entirely. While the VXLAN announcement was significant, it was not clear how aggressively VMware would pursue the hypervisor networking space; but now, looking at the advanced features of VMware's latest distributed switch, at VXLAN, and now at the billion-dollar acquisition of Nicira, it is clear that this is a key strategic area for VMware.
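The VXLAN tunneling idea mentioned above can be sketched with the stock Linux ip tool. This is an illustrative sketch only: the VNI, device names and addresses are assumed placeholders, and VMware's own implementation lives inside the hypervisor vSwitch rather than being configured this way.

```shell
# Build an L2 overlay segment (VNI 5000) tunneled in UDP across any
# routed IP fabric, decoupling the virtual topology from the physical one.
ip link add vxlan0 type vxlan id 5000 dstport 4789 \
    local 192.0.2.10 remote 192.0.2.20 dev eth0
ip link set vxlan0 up
ip addr add 10.200.0.1/24 dev vxlan0   # overlay address, independent of underlay
```

Because the overlay only requires IP reachability between endpoints, the hypervisor can place or move workloads without asking the physical network for new VLANs or topology changes.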

We can expect hypervisor networking to emerge from its neglected state to become the new darling of data center networking. Data center networking itself will also split into separate markets, and the hypervisor networking market in particular will become increasingly distinct from the traditional networking market. The fabric that connects the inside of a private cloud is now analogous to a computer's motherboard, and it will evolve along with the hypervisor market, with increasingly distinct characteristics emerging for the fabric within a compute cluster (a.k.a. a pod or cloud container) versus the fabric that connects different clusters of compute resources together.

There are some obvious, and also many less obvious, reasons why this is bad for Cisco. The most obvious is that Cisco's Nexus 1000V will now face some very serious competition from VMware. Despite that, however, Cisco is not really competing with VMware here, because Cisco has no competing cloud controller. The enterprise hypervisor networking space will therefore become part of the battle between VMware, Microsoft, Citrix and smaller players such as Eucalyptus, Ubuntu and Piston.

I should note that I am a Dell employee, but this is my personal blog, and these are my personal opinions, which do not necessarily reflect the positions of Dell.


Copyright © 2012
