<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: ACE Co-innovation Ecosystem</title>
    <description>The latest articles on DEV Community by ACE Co-innovation Ecosystem (@ace_ecosystem).</description>
    <link>https://dev.to/ace_ecosystem</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1058494%2Ffc171db9-a980-47a5-b9e2-fc0e6182e7d7.jpg</url>
      <title>DEV Community: ACE Co-innovation Ecosystem</title>
      <link>https://dev.to/ace_ecosystem</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ace_ecosystem"/>
    <language>en</language>
    <item>
      <title>10 Key Multi-Cloud Strategies: Complete Cloud Cost Visibility</title>
      <dc:creator>ACE Co-innovation Ecosystem</dc:creator>
      <pubDate>Thu, 19 Oct 2023 03:26:56 +0000</pubDate>
      <link>https://dev.to/ace_ecosystem/10-key-multi-cloud-strategies-complete-cloud-cost-visibility-51ik</link>
      <guid>https://dev.to/ace_ecosystem/10-key-multi-cloud-strategies-complete-cloud-cost-visibility-51ik</guid>
<description>&lt;p&gt;Author: Dave Rollins, Director of Technical Product Marketing for Cross-Cloud Services at VMware.&lt;/p&gt;

&lt;p&gt;Understanding and controlling your cloud spend is a struggle every business will encounter. According to a recent IDC study, one of the top challenges IT decision-makers face is controlling cloud costs. For most organizations, the issue is not having complete visibility into all their cloud resources. This could be due to other departments going around IT to fund their development projects or a recent M&amp;amp;A leading to adopting a new cloud provider. In any case, organizations have a hard time controlling what they cannot see.&lt;/p&gt;

&lt;p&gt;There are two key areas when it comes to cloud cost:&lt;/p&gt;

&lt;p&gt;Understanding your total cloud spend&lt;br&gt;
Ensuring your workloads are sized correctly&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding Your Total Cloud Spend&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With each cloud provider comes a separate set of tools and skills needed to generate resource utilization and cost-based reports on their platform. Retrieving and consolidating meaningful data from each provider takes time to generate and analyze. As a result, these reports are often outdated once they have been compiled. In addition, the report may not be a full picture of what the entire company is spending due to other accounts outside the IT organization’s visibility.&lt;/p&gt;

&lt;p&gt;This is where VMware Aria Cost, powered by CloudHealth, can help. VMware Aria Cost is a robust multi-cloud management platform that helps your organization simplify financial management, streamline operations, and improve cross-organizational collaboration through consolidated visibility and reporting across your entire cloud environment. VMware Aria Cost provides visibility into AWS, Azure, Google Cloud, Oracle, Alibaba Cloud, VMC on AWS (currently in beta), and data center environments. Through one tool, you have complete cost visibility across your entire environment: on-premises, public, hybrid, or multi-cloud.&lt;/p&gt;

&lt;p&gt;The platform ingests and aggregates data from the multiple data streams you use to provide a holistic view of your applications, infrastructure, and business. Using open APIs, VMware Aria Cost seamlessly collects data from cloud providers. It also connects and pulls data from third-party tools you use for application performance management, configuration management, and the like, allowing you to use VMware Aria Cost as your single source of truth for multi-cloud management.&lt;br&gt;
This video provides more details and demonstrates using Aria Cost to understand costs in a multi-cloud environment.&lt;/p&gt;
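To make the consolidation idea concrete, here is a minimal Python sketch of merging per-provider cost exports into one view. It is purely illustrative: the function name and the tuple-based report format are invented for this example and are not the VMware Aria Cost API.

```python
from collections import defaultdict

def consolidate_costs(provider_reports):
    """Merge per-provider cost line items into a single view.

    provider_reports: dict mapping provider name to a list of
    (service, monthly_cost_usd) tuples, as exported from each console.
    Returns per-(provider, service) totals and the grand total.
    """
    totals = defaultdict(float)
    for provider, items in provider_reports.items():
        for service, cost in items:
            totals[(provider, service)] += cost
    grand_total = sum(totals.values())
    return dict(totals), grand_total

reports = {
    "aws":   [("compute", 1200.0), ("storage", 300.0)],
    "azure": [("compute", 800.0)],
}
by_service, total = consolidate_costs(reports)
print(total)  # 2300.0
```

The value of the real platform is that it keeps this view continuously current, rather than compiling a report that is outdated on arrival.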

&lt;p&gt;&lt;strong&gt;Ensuring Your Workloads are Sized Correctly&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Another important aspect of controlling cloud costs is ensuring your workloads are sized appropriately before and after migrating to a public cloud provider. Customers often overlook this step and are surprised by a higher cloud bill than they originally anticipated. In some cases, overprovisioned workloads go unnoticed in the private data center because there may be no chargeback model or other costing mechanism associated with them. Once these workloads move into a metered environment, however, they quickly become apparent, and customers find themselves paying for resources their applications are not consuming. This has led some customers to move workloads back to their on-premises environment with the notion that the public cloud is too expensive.&lt;/p&gt;

&lt;p&gt;As mentioned, there are two aspects to this. The first is to ensure the workloads migrating to the cloud are right sized before moving them to the cloud. With VMware Aria Cost, organizations can gain visibility into the VMs running in their on-premises data center and receive recommendations on how to size them correctly. Once the VMs are running optimally, a Migration Assessment can be performed at the data center level. Creating a new Migration Assessment will show the current cost to run the workload on-premises and compare moving it to AWS, Azure, and Google Cloud, for both on-demand and reserved resources.&lt;/p&gt;

&lt;p&gt;The second aspect is keeping tabs on the VMs once the workload has been moved to the cloud, to ensure they continue to be sized correctly. This keeps them running at peak performance on the most economical instances. With VMware Aria Cost, rightsizing is currently provided for AWS, Azure, and Google Cloud. The platform uses past performance to recommend the appropriately sized instance and shows the difference in cost. This is a more cost-efficient approach to the cloud, allowing you to pick the right instance size to support your applications.&lt;/p&gt;
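The rightsizing idea (pick the cheapest instance whose capacity still covers observed peak demand, plus headroom) can be sketched in a few lines of Python. This is a simplified illustration, not the recommendation algorithm Aria Cost actually uses; the catalog format and the 20% headroom factor are assumptions for the example.

```python
def recommend_instance(peak_cpu_pct, current_vcpus, catalog, headroom=0.2):
    """Recommend the cheapest instance whose vCPUs cover observed peak demand.

    catalog: list of (name, vcpus, hourly_usd) tuples.
    peak_cpu_pct: highest observed CPU utilization of the current instance.
    """
    # Translate percentage utilization into vCPUs actually needed, plus headroom.
    needed = current_vcpus * (peak_cpu_pct / 100.0) * (1 + headroom)
    for name, vcpus, price in sorted(catalog, key=lambda c: c[2]):
        if vcpus >= needed:
            return name, price
    return None  # nothing in the catalog is big enough

catalog = [("small", 2, 0.05), ("medium", 4, 0.10), ("large", 8, 0.20)]
# An 8-vCPU VM that peaked at 30% CPU only needs about 2.9 vCPUs with headroom.
print(recommend_instance(30.0, 8, catalog))  # ('medium', 0.1)
```

In this toy case the recommendation halves vCPU count and hourly price, which is exactly the class of saving rightsizing reports surface.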

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;VMware Aria Cost helps organizations take control of their cloud spend and their multi-cloud journey. We have only scratched the surface of what VMware Aria Cost offers, and we will explore more of its capabilities in future posts on Unified Management and Operations. To get started and see how your organization can benefit from VMware Aria Cost, sign up for a 14-day trial here.&lt;/p&gt;

&lt;p&gt;In the next post, we will look at a faster way to cloud by utilizing a common infrastructure in private and public clouds. This allows for some unique opportunities to accelerate your cloud journey and find your way to cloud smart.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>vmware</category>
      <category>workload</category>
      <category>aria</category>
    </item>
    <item>
      <title>Use Faster, More Secure Paths to Production Today with VMware Tanzu Application Platform 1.6</title>
      <dc:creator>ACE Co-innovation Ecosystem</dc:creator>
      <pubDate>Wed, 11 Oct 2023 09:26:39 +0000</pubDate>
      <link>https://dev.to/ace_ecosystem/use-faster-more-secure-paths-to-production-today-with-vmware-tanzu-application-platform-16-bbg</link>
      <guid>https://dev.to/ace_ecosystem/use-faster-more-secure-paths-to-production-today-with-vmware-tanzu-application-platform-16-bbg</guid>
<description>&lt;p&gt;Author: Denise Martinez, product marketing manager for Tanzu Application Platform at VMware, based in San Francisco.&lt;/p&gt;

&lt;p&gt;Tanzu Application Platform is an end-to-end integrated platform that enables companies to build and deploy more software, more quickly and securely, through pre-paved, customizable “golden paths” to production—all on any public cloud or on-premises Kubernetes clusters.&lt;/p&gt;

&lt;p&gt;Tanzu Application Platform 1.6, available today, delivers on its mission to enhance developer and platform engineering team experiences, increase enterprise security, streamline software supply chains, and much more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enhancing the developer and platform engineering team experience&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here are new features you can look forward to in this new version of Tanzu Application Platform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VMware Tanzu Developer Portal&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tanzu Developer Portal is an internal developer portal, built on Backstage, that can simplify how enterprise software organizations coordinate, collaborate, and execute across multiple teams and business units. Tanzu Developer Portal has been the developer interface for Tanzu Application Platform since its first release, and now includes a portal configurator tool (currently in beta) and support for plug-in integration (also in beta).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Spring Framework 6 native compilation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Spring native images can provide a number of advantages over traditional Java Virtual Machine–based apps:&lt;/p&gt;

&lt;p&gt;Improved startup time, especially for scale-to-zero applications&lt;br&gt;
Lower resource consumption, which can allow organizations to run more applications with the same compute resource, reducing overall spending on infrastructure&lt;/p&gt;

&lt;p&gt;Using Tanzu Application Platform tooling, developers can build their Spring applications with native compilation when deployed in production, while continuing to live update and remotely debug their apps in nonnative mode, within their integrated development environments (IDEs).&lt;/p&gt;

&lt;p&gt;Developers can view the live information of natively compiled Spring applications via Application Live View for VMware Tanzu and can do lightweight troubleshooting by inspecting the health of running processes, changing log levels, updating environment properties, and monitoring HTTP request/response traffic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--q30VzFoi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q9ngqx1cqkfspbqndp4i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q30VzFoi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q9ngqx1cqkfspbqndp4i.png" alt="Image description" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;View live information for natively compiled Spring applications via Application Live View for VMware Tanzu.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automated AppSSO configuration for application workloads&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tanzu Application Platform 1.6 makes it even easier for developers to secure their workloads with AppSSO across environments, in a portable manner. Developers no longer need to consider redirecting URIs for each environment when securing their applications. They can now create one ClassClaim and a workload can be deployed across multiple deployment environments—without requiring separate configurations for enabling SSO in each environment. This simplification of consuming AppSSO enables developers and platform engineers to focus on other parameters for securing workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project creation using App Accelerators in IntelliJ&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Developers can start a new project in minutes from their preferred integrated development environment (IDE). They can now provision a Git repository when creating a project using accelerators in IntelliJ IDE, and the generated code is pushed to the provisioned repository, eliminating the manual steps of Git repo creation. As projects are created using an accelerator from IntelliJ IDE, an application bootstrapping provenance manifest is generated to provide organizations with early visibility so that they can assess whether applications are conforming to their best practices.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fro2ZzlH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5z0vhutxv77kwysvpg7p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fro2ZzlH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5z0vhutxv77kwysvpg7p.png" alt="Image description" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A view of the IntelliJ Application Accelerator plug-in.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enhanced Visual Studio extension&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Workload panel in Visual Studio now shows deployed workload status, enabling .NET developers to manage workloads and troubleshoot errors.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wx-Oxane--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c5m5qjpy5d60uy1dgu86.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wx-Oxane--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c5m5qjpy5d60uy1dgu86.png" alt="Image description" width="800" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Workload panel in Visual Studio.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Improved container image registry interaction with Local Source Proxy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Local Source Proxy provides an intrinsically more secure and user-friendly mechanism for developers to interact with external registries without needing to know registry specifics such as endpoints, credentials, and certificates. Developers can focus on their application logic instead of managing container registry details during the development phase, reducing complexity and friction. Some of the benefits of Local Source Proxy include:&lt;/p&gt;

&lt;p&gt;Developers’ ability to deploy a workload from local source code through any mechanism, including IDE extensions, without specifying their source image location or managing their registry credentials&lt;br&gt;
Developers are no longer required to have Docker installed on their local machines to do iterative development&lt;br&gt;
Local Source Proxy is compatible with AWS ECR, including providing an AWS IAM role for ECR authentication&lt;br&gt;
Reduced burden on platform and operations teams to maintain, rotate, and distribute registry credentials to individual developer workstations&lt;/p&gt;

&lt;p&gt;The default behavior of Tanzu Application Platform IDE plug-ins, App Accelerators, and the apps CLI has been modified to align with the functionality of the Local Source Proxy.&lt;/p&gt;

&lt;p&gt;Developers typically install IDE extensions from the IDE Marketplace. Starting with this release, VMware Tanzu Developer Tools for VS Code and VMware Tanzu Application Accelerator for VS Code will be made available in the VS Code Marketplace. Similarly, VMware Tanzu Developer Tools for IntelliJ will be available in the IntelliJ Marketplace. Developers can install the extensions within their IDEs, a potentially more familiar setting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enterprise security at scale&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Secure-by-default server workloads&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Developers can now create server workloads that are externally exposed to the public internet via a Contour Ingress, and all external HTTP traffic is secured by default with TLS. HTTPS via TLS is autoconfigured for server workloads without developers needing to configure it manually.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bring your preferred scanner (beta)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Simplifying the process of integrating container image vulnerability scanners in software supply chains has been a core focus of the Tanzu Application Platform 1.6 release. First introduced as alpha in the Tanzu Application Platform 1.5 release, the Supply Chain Security Tools - Scan 2.0 component has been promoted to beta.&lt;/p&gt;

&lt;p&gt;The enhancements in this release focus on enabling the use of the custom scan integrations across the Tanzu Application Platform, including:&lt;/p&gt;

&lt;p&gt;The ability to enable the next-generation image scan component in the out-of-box test and scan supply chain&lt;br&gt;
Scan results now observed and pushed to the metadata store for long-term archival and retrieval&lt;br&gt;
Scan results are now represented in Tanzu Developer Portal, including the Supply Chain Choreographer for VMware Tanzu and Security Analysis GUI plug-ins&lt;/p&gt;

&lt;p&gt;The VMware Tanzu team encourages feedback on this next-generation scan interface. If you are interested in sharing your experience, get in touch with your representative or contact us.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Triage CVEs via the Tanzu Insight CLI (alpha)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Reduce spreadsheet and tool toil by centralizing CVE scanning, identification, and triaging in one place. Using the Tanzu Insight CLI, customers can now perform basic triaging functions against any detected vulnerabilities: view, update, and clone triage statuses for a specific CVE for Tanzu Application Platform-scanned workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Track SBOMs after every build&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It is now possible to extract a software bill of materials (SBOM) for a particular workload build. Previously, customers were only able to generate an SBOM for the latest workload build. Via new Metadata Store API endpoints, customers can download an SBOM from any workload build, enabling them to keep better track of how a workload evolves for faster auditing and security vulnerability remediation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wjY6d63B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/to28gw6k9ud1b1f45p9n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wjY6d63B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/to28gw6k9ud1b1f45p9n.png" alt="Image description" width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Track software bills of materials (SBOMs) after every build.&lt;/p&gt;
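Because the SBOMs are standard CycloneDX (or SPDX) documents, downstream tooling can diff them between builds to see how a workload's dependencies evolved. A small illustrative Python sketch, using a hand-made minimal CycloneDX document rather than real Metadata Store output:

```python
import json

def component_versions(sbom_json):
    """Map component name to version from a CycloneDX SBOM document."""
    sbom = json.loads(sbom_json)
    return {c["name"]: c["version"] for c in sbom.get("components", [])}

def diff_builds(old_sbom, new_sbom):
    """Report components whose version changed between two builds."""
    old, new = component_versions(old_sbom), component_versions(new_sbom)
    return {name: (old[name], ver) for name, ver in new.items()
            if name in old and old[name] != ver}

build_1 = json.dumps({"bomFormat": "CycloneDX", "components": [
    {"name": "spring-core", "version": "6.0.9"}]})
build_2 = json.dumps({"bomFormat": "CycloneDX", "components": [
    {"name": "spring-core", "version": "6.0.11"}]})
print(diff_builds(build_1, build_2))  # {'spring-core': ('6.0.9', '6.0.11')}
```

Pinpointing the build where a vulnerable component version first appeared is what makes per-build SBOM retention useful for audits and remediation.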

&lt;p&gt;&lt;strong&gt;Download SBOMs directly from Tanzu Developer Portal&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Users can now download SBOMs in CycloneDX and SPDX formats directly from Tanzu Developer Portal (in the Tanzu Developer Portal Supply Chain, at the Image Scan stage). The SBOM is generated by the metadata store and represents the latest build. This capability enables faster vulnerability remediation and compliance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;App Live View access control for sensitive actions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Organizations can configure more granular access control for sensitive actions, such as changing log levels, modifying environment properties, and taking a heap dump from running workloads, per user, per group, or at the workload level. This provides finer control over access to sensitive actions, especially in production environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Streamlining the software supply chain&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Save time with automated builds&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The ability of Tanzu Application Platform to produce automated builds based on upstream changes in dependencies used by workloads can improve security posture and can save developers time. This functionality is provided by VMware Tanzu Build Service. Using Tanzu Build Service in a supply chain can further streamline the process by enabling builds provided by Tanzu Build Service to be seamlessly deployed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build Service plug-in for VMware Tanzu CLI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This supply chain automation is helpful, but developers and platform engineers might want to delve deeper into Tanzu Build Service. A developer might need to access more information to diagnose a failed build, or a platform engineer might want to inspect more details about the buildpacks configured in the supply chain or the configurations used when building a workload. The new Build Service plug-in for the VMware Tanzu CLI helps users inspect Tanzu Build Service when they want to peel back the layers of the supply chain abstraction and better understand how this critical piece operates.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AIyc-7OH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iag6vtjxmll7wgpt66su.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AIyc-7OH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iag6vtjxmll7wgpt66su.png" alt="Image description" width="800" height="64"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Additional self-signed CA support&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tanzu Application Platform now offers Custom CA support for on-premises Git repositories in supply chains. This is an especially important capability for customers in air-gapped environments, as they run on-premises Git repositories and use their custom signed certificates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Carvel Package Supply Chain enhancements&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Carvel Package Supply Chains now support web, server, and worker workloads (beta). This feature allows Tanzu Application Platform users to create an application artifact (Carvel Package) with any Tanzu Application Platform workload type that is portable from one environment to another.&lt;/p&gt;

&lt;p&gt;Customers can also define custom Carvel Package parameters when using Carvel Package Supply Chain (beta), allowing them to define custom, per-environment configuration for workloads. This gives users the flexibility to deploy a single workload artifact with different, environment-specific runtime configurations, which typically vary between development, test, stage, or prod environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Improved error logging provided by buildpacks&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tanzu Application Platform customers can now see improved error logging during the build:&lt;/p&gt;

&lt;p&gt;All builds now show the commands and flags that were passed to the build, along with an output stream in real time&lt;br&gt;
During the detection phase, customers can now see clear details on why a build failed&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Faster, more seamless installation experience&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install across clouds with an improved, simplified installation experience&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tanzu Application Platform can now be powered by a GitOps-based installation that eliminates the need for running multiple commands manually, reduces complexity, and saves time. The GitOps methodology involves declaring a desired state of a system (typically in Git), and a reconciliation process, which directs the actual system (e.g., Kubernetes cluster contents) to converge to the desired state (in Kubernetes, typically done via a controller). The GitOps installation effort has been further enhanced with the integration of HashiCorp Vault external secret operators, as well as support for Azure DevOps (repositories). Customers installing Tanzu Application Platform are now able to drive change to their system by changing the desired state stored in a Git repository. This can greatly simplify the installation process by leveraging the customer’s existing tools. It also helps customers conduct audits and tracing of changes in their environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6bc7eU-6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nfkb7zj1xqjm1ukbf6iz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6bc7eU-6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nfkb7zj1xqjm1ukbf6iz.png" alt="Image description" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitOps-managed install of Tanzu Application Platform.&lt;/p&gt;
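The reconciliation process described above can be sketched abstractly: compare the declared desired state with the observed actual state and derive create, update, and delete actions. This toy Python loop only illustrates the idea; a real GitOps controller (in Kubernetes, typically a controller such as the Carvel tooling used here) operates on full Kubernetes resources, not dicts.

```python
def reconcile(desired, actual):
    """Compute the actions a controller would take to converge the actual
    state toward the desired state declared in Git."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

# Hypothetical component names, for illustration only.
desired = {"tap-gui": {"replicas": 2}, "metadata-store": {"replicas": 1}}
actual  = {"tap-gui": {"replicas": 1}}
print(sorted(reconcile(desired, actual)))
# [('create', 'metadata-store', {'replicas': 1}), ('update', 'tap-gui', {'replicas': 2})]
```

Editing the desired state in Git and letting the loop converge is what replaces running multiple installation commands by hand.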

</description>
      <category>tanzu</category>
      <category>vmware</category>
      <category>devops</category>
    </item>
    <item>
      <title>Streamlining Federated Learning Workflows with MLOps Platform</title>
      <dc:creator>ACE Co-innovation Ecosystem</dc:creator>
      <pubDate>Tue, 10 Oct 2023 07:56:44 +0000</pubDate>
      <link>https://dev.to/ace_ecosystem/streamlining-federated-learning-workflows-with-mlops-platform-4p99</link>
      <guid>https://dev.to/ace_ecosystem/streamlining-federated-learning-workflows-with-mlops-platform-4p99</guid>
<description>&lt;p&gt;Author: Fangchi Wang, Staff Engineer in VMware AI Labs, Office of the CTO.&lt;/p&gt;

&lt;p&gt;Federated Learning, or FL, has gained significant attention recently due to its privacy-preserving and communication-efficient approach to applying AI/ML to distributed data. VMware has been actively participating in the FL community by contributing to open source projects, publishing solution whitepapers, and promoting related techniques through various events. Our primary focus is providing secure, robust infrastructure and deployment management solutions for FL systems and workloads, leveraging VMware products and solutions. We are excited to introduce our recent collaboration with One Convergence™ Inc. to integrate Federated Learning into MLOps solutions, particularly the DKube platform, to enhance our customers’ FL workflows with a seamless experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Federated Learning and FATE (Federated AI Technology Enabler)&lt;/strong&gt;&lt;br&gt;
The success of artificial intelligence critically depends on the quantity and quality of data used for training effective prediction models. However, in real-world applications, data often remains isolated in individual data silos. This isolation poses a crucial challenge when it comes to sharing data, primarily due to business competition and the need to comply with privacy-protection laws and regulations such as the General Data Protection Regulation (GDPR). The inability to fully utilize the data thus impedes the training process required to develop meaningful models. To tackle this issue, federated learning has emerged, offering a solution that allows organizations to overcome data silos while ensuring data privacy and security in alignment with regulations.&lt;/p&gt;
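At its core, the canonical federated aggregation rule, FedAvg, averages locally trained model weights in proportion to each participant's data volume, so raw records never leave their silo. A minimal sketch of that rule:

```python
def federated_average(client_updates):
    """Aggregate client model weights by example-count-weighted averaging
    (the FedAvg rule): the coordinator only ever sees weight vectors and
    sample counts from each silo, never the underlying training records."""
    total_examples = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    avg = [0.0] * dim
    for weights, n in client_updates:
        for i, w in enumerate(weights):
            avg[i] += w * n / total_examples
    return avg

# Two silos train locally on 100 and 300 records, then share only weights.
updates = [([1.0, 2.0], 100), ([3.0, 4.0], 300)]
print(federated_average(updates))  # [2.5, 3.5]
```

Frameworks like FATE layer secure computation protocols on top of this basic scheme so that even the shared weight updates are protected.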

&lt;p&gt;FATE, an open source project hosted by the LF AI &amp;amp; DATA Foundation, provides a secure computing framework that underpins the federated AI ecosystem. It has garnered contributions from industry leaders such as WeBank, VMware, Tencent, UnionPay, and many others. Originating from the financial industry, FATE strongly emphasizes privacy preservation and is designed for industrial applications. Its primary objective is to implement secure computation protocols, leveraging advanced techniques such as homomorphic encryption and multi-party computation. By adopting these protocols, FATE enables the utilization of various machine learning algorithms while ensuring robust data privacy and security measures are in place.&lt;/p&gt;

&lt;p&gt;As a TSC (Technical Steering Committee) board member of the FATE community, the VMware AI Labs team has been making significant contributions to the FATE ecosystem, including key features in FATE releases as well as the creation of cloud-native FL solutions like KubeFATE and FedLCM. To learn more about Federated Learning and VMware’s cloud-native FL efforts, please refer to the following previous blogs:&lt;/p&gt;

&lt;p&gt;Federated Machine Learning: Overcoming Data Silos and Strengthening Privacy&lt;br&gt;
Cloud-Native Federated Learning and Projects&lt;/p&gt;

&lt;p&gt;Similar to any other machine learning task, applying FATE and FL involves all typical MLOps workflows. For cloud-native machine learning, Kubeflow is one of the top choices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“Implement” Kubeflow with DKube&lt;/strong&gt;&lt;br&gt;
In recent years, Kubeflow has evolved into a leading AI/ML platform, integrating many open-source advancements to create a cost-effective solution. Notably, as of July 2023 it has transitioned from a Google project to an independent CNCF project.&lt;/p&gt;

&lt;p&gt;However, “implementing” Kubeflow in your preferred cloud or on-prem environment still requires significant work. Deploying Kubeflow successfully and operationalizing your data, model prep, tuning, deployment, and monitoring while managing security, compliance, and governance is still rather challenging. Doing it yourself can mean many months of work for several people, for every new organization and almost every new installation. The productivity and time loss are significant, and the cost savings of using Kubeflow are offset by the added expense and time, which can run to hundreds of thousands of dollars and many months per installation. For this reason, many Kubeflow installation projects at large Fortune 100 companies have stalled.&lt;/p&gt;

&lt;p&gt;But there is some good news on the Kubeflow front. New AI/ML platforms built from the ground up natively on top of Kubeflow can address this challenge for you. DKube from One Convergence™ Inc., for example, has built a standard Kubeflow package with a better, more modern UI, and it integrates with AWS EKS, Azure AKS, or any cloud or on-prem Kubernetes distribution such as VMware Tanzu Kubernetes Grid. As shown in the graphic below, it integrates with Azure Blob or Azure NFS, AWS S3, and on-prem S3/NFS/Ceph storage. It integrates with Active Directory or LDAP authentication in any cloud or on-prem installation. It integrates with Git, GitOps, Bitbucket, and Azure DevOps version control systems. It integrates with healthcare data sources like Arvados or Flywheel. In other words, you get a shrink-wrapped package that, with a few simple commands at install and config time, can get you going in AWS, Azure, GCP, or on-prem on a Kubernetes distribution of your choice. Going from the start of installation to an onboarded user can be as quick as a few hours or a day, depending on the complexity of the cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0-pdI0dP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9kzlxtimhwqvzr7yhoqq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0-pdI0dP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9kzlxtimhwqvzr7yhoqq.png" alt="Image description" width="800" height="531"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accelerating FATE Workflow through DKube Integration&lt;/strong&gt;&lt;br&gt;
Through a collaborative effort between VMware AI Labs and the DKube engineering team, the support for FATE has been integrated into DKube. As shown in the diagram below, FL workflows can be streamlined via DKube IDEs, Runs, and Model Management functionalities upon deploying and configuring FATE systems. In the following sections, we will explore the detailed steps of this integration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--enOSwul5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s1uk371hse9dmosvx4bp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--enOSwul5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s1uk371hse9dmosvx4bp.png" alt="Image description" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploying and Configuring FATE Clusters&lt;/strong&gt;&lt;br&gt;
As previously mentioned, the VMware AI Labs team maintains two open-source projects, KubeFATE and FedLCM, which offer the capability to deploy and manage FATE systems in a cloud-native manner. KubeFATE facilitates the provisioning and management of FATE systems, also known as FATE clusters, on Kubernetes in data centers and multi-cloud environments. FedLCM, in turn, orchestrates FATE deployments from a multi-party perspective, enabling the operation and connection of distributed FATE clusters to form the federated learning “federation.”&lt;/p&gt;
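
&lt;p&gt;As a rough sketch of what cloud-native FATE provisioning looks like (not taken from this post; the party ID, namespace, chart version, and module list below are illustrative values, not a tested configuration), KubeFATE consumes a YAML cluster definition and installs it with its CLI:&lt;/p&gt;

```yaml
# cluster.yaml -- illustrative KubeFATE cluster definition.
# Field names follow the KubeFATE examples; all values here are hypothetical.
name: fate-9999
namespace: fate-9999
chartName: fate
chartVersion: v1.11.1      # should match the FATE release being deployed
partyId: 9999
persistence: false
modules:
  - rollsite
  - clustermanager
  - nodemanager
  - mysql
  - python                 # includes the FATE-Flow service
  - fateboard
  - client
```

&lt;p&gt;Each party would then run something like &lt;code&gt;kubefate cluster install -f cluster.yaml&lt;/code&gt; against its own Kubernetes cluster; FedLCM automates this per-party orchestration and connects the resulting clusters into a federation.&lt;/p&gt;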

&lt;p&gt;Once the FATE federation is created, each participant will use DKube to interact with its FATE system and manage FATE jobs. To enable this functionality, the FATE cluster needs to be added in the Operational View of the DKube UI. Simply navigate to the Clusters page and add the FATE cluster’s FATE-Flow access details.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XhAA3Vzi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mwzlf9b1ov3zuykbly6h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XhAA3Vzi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mwzlf9b1ov3zuykbly6h.png" alt="Image description" width="800" height="555"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developing FATE Training Code Using DKube IDEs&lt;/strong&gt;&lt;br&gt;
Once the FATE cluster information is added into DKube, we can start working with it in the DKube IDEs tab in the Data Science View. On the IDE creation page, we can select FATE as the ML framework, which enables the creation of a JupyterLab instance with the FATE client SDK pre-installed. In the configuration section, we can select the newly added FATE cluster so that the IDE instance automatically configures the FATE client SDK to connect to that specific cluster. This enables users to seamlessly write and test their FATE client code and effectively manage data and jobs within the FATE cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UbYXBHPr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0vatlpt20buo1qhiejiz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UbYXBHPr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0vatlpt20buo1qhiejiz.png" alt="Image description" width="800" height="576"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Launching FATE FL Job via DKube Runs&lt;/strong&gt;&lt;br&gt;
Besides interacting with the FATE cluster via DKube IDEs, we can also launch FATE jobs in DKube Runs. Similar to using FATE in the DKube IDEs, we can specify FATE as the framework and the target FATE cluster to execute the job. Moreover, the trained model can be retrieved and saved into DKube for horizontal federated learning. Once a Run is completed, the trained model will be on the DKube Models page, and we can proceed with deploying the model into an online serving service, following the standard DKube model deployment workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Pt4DzPNy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/znz1lct0cyx6b63ps2an.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Pt4DzPNy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/znz1lct0cyx6b63ps2an.png" alt="Image description" width="800" height="580"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;DKube IDEs and Runs support all FATE federated learning algorithms, including FATE-LLM, a recently released module enabling parameter-efficient fine-tuning of large language models through the federated learning approach. It has been verified that official FATE-LLM examples can be executed within DKube.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What’s Next?&lt;/strong&gt;&lt;br&gt;
Besides KubeFATE and FedLCM, VMware has actively engaged with and made substantial contributions to the FL community. One of our notable contributions is the introduction of the FATE-Operator to Kubeflow, enabling FATE management through the operator pattern. We are also collaborating with and contributing to OpenFL, another open-source federated learning project hosted by LF AI &amp;amp; Data. These contributions can be integrated into MLOps platforms such as DKube to implement an end-to-end FL process, covering everything from deployment and operation to the freedom to select the most suitable FL framework. We continue to work closely with partners to ensure that we bring together the best of each solution and accelerate our customers’ success on their AI/ML journey.&lt;/p&gt;

</description>
      <category>mlops</category>
      <category>vmware</category>
      <category>fate</category>
      <category>dkube</category>
    </item>
    <item>
      <title>10 Key Multi-Cloud Strategies: Assessing Your Readiness for Multi-Cloud</title>
      <dc:creator>ACE Co-innovation Ecosystem</dc:creator>
      <pubDate>Tue, 19 Sep 2023 10:08:51 +0000</pubDate>
      <link>https://dev.to/ace_ecosystem/10-key-multi-cloud-strategies-assessing-your-readiness-for-multi-cloud-338h</link>
      <guid>https://dev.to/ace_ecosystem/10-key-multi-cloud-strategies-assessing-your-readiness-for-multi-cloud-338h</guid>
      <description>&lt;p&gt;Author: Dave Rollins, Director of Technical Product Marketing for Cross-Cloud Services at VMware. &lt;/p&gt;

&lt;p&gt;One of the biggest mistakes a company can make in its multi-cloud journey is to jump in without a well-thought-out plan. In the previous post introducing the cloud smart journey, the initial phase was Cloud First. Most businesses shifted to the cloud without a thought, ultimately leading them to a state of Cloud Chaos. To create this plan, you will need to understand where you are in your cloud journey, identify any gaps, and decide how to close them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--exZDBDAz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/niihcb2ce0taj5z48v6r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--exZDBDAz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/niihcb2ce0taj5z48v6r.png" alt="Image description" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Companies that are successful in their cloud journey focus on more than just technology; there are also people and process considerations. In addition, the cloud strategy should align with a company’s business goals. This means the right roles and responsibilities, along with the toolset to support them, are in place to support their company’s objectives.&lt;/p&gt;

&lt;p&gt;With the VMware Multi-Cloud and App Maturity Model (MCAM) tool, customers can get a clear picture of where their company is on its journey and drive the successful adoption of cloud technologies and services. It assesses where a company is today with multi-cloud and modern apps and what the desired future state would be. The tool poses a set of 30 questions to different roles across a company’s organization about the current and future state of maturity. MCAM assesses more than just the infrastructure and application aspects of a business.&lt;/p&gt;

&lt;p&gt;It focuses on six key domains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud Vision and Strategy&lt;/li&gt;
&lt;li&gt;Business Outcomes and Goals&lt;/li&gt;
&lt;li&gt;Leadership, Governance, and Processes&lt;/li&gt;
&lt;li&gt;People, Tools, and Enablement&lt;/li&gt;
&lt;li&gt;Applications and Development&lt;/li&gt;
&lt;li&gt;Infrastructure, Data, and Platforms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;MCAM calculates these details and provides two critical pieces of information:&lt;/p&gt;

&lt;p&gt;A set of recommendations, including different services and resources, to help close the gap between where a company is today and its desired end state.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--flhUZjFI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7otknopo8rwsee7pkaaa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--flhUZjFI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7otknopo8rwsee7pkaaa.png" alt="Image description" width="800" height="248"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A Gap Analysis and Heatmap to help visualize the areas that need the most attention and investment to achieve the desired maturity level.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EfPADVZ9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sksltkn5l5dxv5t4tuqs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EfPADVZ9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sksltkn5l5dxv5t4tuqs.png" alt="Image description" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The report and recommendations can be exported and shared with a VMware representative or a VMware Partner.  In addition, VMware Partners can leverage this tool and walk their customers through the assessment and recommend their own services to help close the gaps.  &lt;/p&gt;

</description>
    </item>
    <item>
      <title>10 Key Multi-Cloud Strategies: Introduction</title>
      <dc:creator>ACE Co-innovation Ecosystem</dc:creator>
      <pubDate>Tue, 19 Sep 2023 09:47:33 +0000</pubDate>
      <link>https://dev.to/ace_ecosystem/10-key-multi-cloud-strategies-introduction-40a</link>
      <guid>https://dev.to/ace_ecosystem/10-key-multi-cloud-strategies-introduction-40a</guid>
      <description>&lt;p&gt;Author: Dave Rollins, Director of Technical Product Marketing for Cross-Cloud Services at VMware. &lt;/p&gt;

&lt;p&gt;As VMware continues to educate customers about the multi-cloud problem and the benefits of VMware Cross-Cloud Services in their journey to cloud smart, one question is commonly asked: “How do I get started?” This blog series aims to provide guidance and review the top 10 multi-cloud areas where customers should focus. There will also be demos, tools, and other resources to assist you. Before we jump into our first topic, let’s review multi-cloud, Cross-Cloud Services, and the journey to Cloud Smart.&lt;/p&gt;

&lt;p&gt;With 87% of enterprises using two or more clouds 1, having a multi-cloud strategy is critical to staying competitive. As companies have started leveraging multiple clouds, whether through M&amp;amp;A, the need for specific cloud-native services, or developer preference, it’s led to siloed operations, increased costs, and tradeoffs between developer velocity and IT control.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NmbJ0T8S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x93hwd77sjj6ouuk4zjo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NmbJ0T8S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x93hwd77sjj6ouuk4zjo.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Approximately a decade ago, as companies started to embrace the public cloud, the mindset was Cloud First. This phase’s big focus was building customer-facing applications, typically in a single cloud.  They started to see the benefits of modern app creation in the public cloud and the ability to get applications to market much faster, giving them a competitive edge. This first wave of cloud innovation drove massive advances in application innovation and velocity. However, controlling those applications and the underlying infrastructure proved difficult for most customers.&lt;/p&gt;

&lt;p&gt;In a survey of VMware customers, many described their current state as “Cloud Chaos.” Initially, the greater selection of clouds was valuable to customers, but it ultimately led to a massive spike in complexity. Most report that building new apps is slow and cumbersome, and managing their entire app portfolio across disparate clouds is difficult and expensive. Each cloud requires an organization’s team to use proprietary tools that are siloed and incompatible with other cloud providers. Meanwhile, getting fast, secure access to critical apps from anywhere is imperative for the average employee, yet it is often a challenge.&lt;/p&gt;

&lt;p&gt;The desired destination of every multi-cloud journey is cloud smart. Being cloud smart means taking an architected and planned approach to multi-cloud and digital transformation. With a cloud-smart approach, you have the freedom to select the right cloud for the right application based on its needs. An overwhelming majority of companies taking a cloud smart approach say multi-cloud positively impacts their revenue, helps them retain and recruit top talent, and makes it easier to manage their data and extract value from it.&lt;/p&gt;

&lt;p&gt;Now that we understand the problems customers are facing with multi-cloud, how do we solve them? The good news is that this is something VMware is great at: abstraction. Much like ESX, which abstracts the underlying hardware so virtual machines can run on disparate servers, we can apply the same concept to the multi-cloud problems across all the clouds: a set of services that provide an abstraction layer where we need it, and a single set of tools to manage the entire multi-cloud environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--H7i3gXzU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z7w48s0ea3vdxs8jvycu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--H7i3gXzU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z7w48s0ea3vdxs8jvycu.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are five key areas of abstraction:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;App Platform – By abstracting the application environment with Tanzu, we can build any application for any cloud using whichever app framework developers are familiar with. This is all done in a repeatable, consistent, and secure way.&lt;/li&gt;
&lt;li&gt;Cloud Management – From an operations or management perspective, abstracting this level with tools from Aria, we can deliver a multi-cloud operating model and provide visibility into cloud spend.&lt;/li&gt;
&lt;li&gt;Cloud and Edge Infrastructure – We can run any application on any cloud or edge location by abstracting the infrastructure layer and utilizing VMware Cloud.&lt;/li&gt;
&lt;li&gt;Security and Networking – When we abstract the Security and Networking layer with NSX and Carbon Black, we can provide consistent policies and guardrails regardless of the cloud.&lt;/li&gt;
&lt;li&gt;Anywhere Workspace – This provides anywhere, any device access with the best experience, all in a secure manner.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---HXXbaXG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xuka0qtcqtn1rlm673jp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---HXXbaXG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xuka0qtcqtn1rlm673jp.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you bring these all together, it creates this new layer of abstraction called VMware Cross-Cloud Services. This portfolio of cloud services delivers a unified and simplified way to build, operate, access, and secure any application on any cloud.&lt;/p&gt;

&lt;p&gt;We built the portfolio as an integrated solution, but each service is independent, allowing customers to pick and choose the cloud services that deliver the most value to them based on where they are in their cloud journey. Customers are using a combination of these services to power three transformations: rapidly developing and deploying applications, accelerating the transition to the cloud, and empowering their distributed workforce.&lt;/p&gt;

&lt;p&gt;Now that we have reviewed the challenges most customers face with multi-cloud and how to solve them with VMware Cross-Cloud Services, we can start to answer the initial question: “How do I get started?” In this blog series, we will do just that. As the series progresses, this post will be updated with links to future posts.&lt;/p&gt;

&lt;p&gt;The first topic we will cover in this series is assessing your organization, uncovering gaps, and recommending improvements using the VMware Multi-Cloud and App Maturity Model tool.&lt;/p&gt;

&lt;p&gt;Key Strategies Covered to Date:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Assessing Your Readiness for Multi-Cloud&lt;/li&gt;
&lt;li&gt;Complete Cloud Cost Visibility&lt;/li&gt;
&lt;li&gt;Common Infrastructure Across Clouds&lt;/li&gt;
&lt;li&gt;Assessing Applications&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Enabling AI: Announcing the Ray on Open-Source Plugin</title>
      <dc:creator>ACE Co-innovation Ecosystem</dc:creator>
      <pubDate>Thu, 24 Aug 2023 08:14:44 +0000</pubDate>
      <link>https://dev.to/ace_ecosystem/enabling-ai-announcing-the-ray-on-open-source-plugin-4966</link>
      <guid>https://dev.to/ace_ecosystem/enabling-ai-announcing-the-ray-on-open-source-plugin-4966</guid>
      <description>&lt;p&gt;Author: Ala Dewberry, Senior Product Manager in xLabs, a product incubation program in OCTO. Sean Huntley, Product Engineer in the Advanced Technologies Group within the Office of the CTO.&lt;/p&gt;

&lt;p&gt;In the last year, there has been an explosive amount of progress in machine learning and artificial intelligence. High-quality generative AI solutions like ChatGPT have ushered in a public interest that has carried over to the business world. Organizations and individuals alike are considering how they can make use of this technology to accelerate their impact and delight their customers.&lt;/p&gt;

&lt;p&gt;While these general-use models are fantastic, they often fall short in industry-specific use cases. Publicly available training data cannot prepare a model for the niche expertise needed to address use cases unique to each business. To meet these needs, many organizations are investing in tuning and training their own models. To do so, they need to scale their compute footprint beyond an engineer’s laptop or existing build tooling. Data scientists and ML engineers need access both to tools that help them scale their workloads and to computing resources to match.&lt;/p&gt;

&lt;p&gt;To meet these challenges, VMware is excited to announce our partnership with Anyscale, the creators of Ray. Ray is a distributed Python workload scheduler optimized for ML workloads, bringing serverless-style scaling to training and inferencing workloads. Ray enjoys broad adoption and delivers excellent performance when it comes to parallel processing and distributed computing.&lt;/p&gt;

&lt;p&gt;Anyscale and VMware have partnered to create an open-source plugin to run Ray on vSphere using virtual machines. This plugin enables system administrators to serve data science teams with compute infrastructure that meets their needs. When data science teams have access to compute to run the workloads that power their data exploration, cleaning, and model experimentation, organizations can reduce the time it takes to go from raw data to a differentiated model that furthers the target business outcome. It’s DevOps all over again, but this time the goal is to ship working models to production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Does it Work?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A Ray cluster contains a head node and worker nodes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UPuvNcmh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/95hwf3vtye4acbazs8jl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UPuvNcmh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/95hwf3vtye4acbazs8jl.png" alt="Image description" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The head node manages the cluster and scales the number of worker nodes within it. These distributed worker nodes are responsible for training, fine-tuning, and serving models.&lt;/p&gt;

&lt;p&gt;To get started, the Head Node’s Autoscaler needs to understand how large a cluster it can provision and where it can provision it. It does this with a Cluster Configuration File.&lt;/p&gt;

&lt;p&gt;To make this possible, our plugin extends the Autoscaler to work directly with VMs on vSphere.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h_giq2H3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/568w1k01ckimucaiq3zz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h_giq2H3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/568w1k01ckimucaiq3zz.png" alt="Image description" width="700" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To orchestrate Ray workloads, the Autoscaler plug-in makes calls to a vSphere cluster. A vSphere cluster is a group of hosts where the resources of the hosts become part of the resources of the cluster. The cluster manages the resources of all hosts within it. Clusters enable vSphere High Availability (HA) and vSphere Distributed Resource Scheduler (DRS). These features ensure that the Ray cluster is fault-tolerant and isolated from other mission-critical workloads, and that compute resources are optimally allocated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuring a vSphere Provider&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The image below shows a sample Ray cluster configuration file for use with vSphere. In the provider section, we must set the type to vSphere and provide credentials for the vSphere cluster, along with a datastore on which to deploy the Ray cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bzfCENFc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u9k70yaikbiv2ukcyplk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bzfCENFc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u9k70yaikbiv2ukcyplk.png" alt="Image description" width="800" height="617"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, in both the worker node and head node configuration, we can target a specific resource pool to isolate Ray workers from other workloads. As a performance improvement, we may also specify a frozen VM, which is kept in a frozen state and used as an instant-clone source to rapidly scale out worker nodes.&lt;/p&gt;
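
&lt;p&gt;As a hedged sketch of what such a file contains: the overall shape follows the Ray autoscaler cluster-configuration format, but the vSphere-specific key names and all values below are illustrative assumptions, not the plugin’s authoritative schema:&lt;/p&gt;

```yaml
# ray-vsphere.yaml -- illustrative excerpt only; vSphere-specific key names
# are assumptions and may differ from the released plugin's schema.
cluster_name: ray-on-vsphere
provider:
  type: vsphere
  vsphere_config:
    credentials:
      server: vcenter.example.com     # hypothetical vCenter endpoint
      user: ray-admin@vsphere.local
      password: REDACTED
    datastore: vsanDatastore          # datastore for the Ray node VMs
available_node_types:
  worker:
    resources: {"CPU": 4}
    node_config:
      resource_pool: ray-workers      # isolates Ray from other workloads
      frozen_vm_name: ray-frozen-vm   # instant-clone source for fast scale-out
```

&lt;p&gt;The design mirrors the cloud providers: the Autoscaler reads this file, then clones or powers off VMs to match demand, just as it would launch or terminate instances on a public cloud.&lt;/p&gt;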

&lt;p&gt;&lt;strong&gt;What’s Next&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What we’ve shared today is just step one. We are currently exploring how to capture unutilized compute to train ML models at quiet times in the data center, enabling organizations to get more value from their data centers without endangering production workloads. It’s also great for the planet!&lt;/p&gt;

&lt;p&gt;We are ready to welcome the new age of automation with our Ray on vSphere plugin and streamline access to Machine Learning. Join us on this journey by trying out the plugin once available, joining the Slack channel, or emailing us with questions at &lt;a href="mailto:rayonvmware@vmware.com"&gt;rayonvmware@vmware.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ray</category>
      <category>vmware</category>
      <category>ai</category>
      <category>vsphere</category>
    </item>
    <item>
      <title>Introducing Workspace ONE Unified Endpoint Management multi-user support for Windows</title>
      <dc:creator>ACE Co-innovation Ecosystem</dc:creator>
      <pubDate>Tue, 22 Aug 2023 03:43:07 +0000</pubDate>
      <link>https://dev.to/ace_ecosystem/introducing-workspace-one-unified-endpoint-management-multi-user-support-for-windows-4l95</link>
      <guid>https://dev.to/ace_ecosystem/introducing-workspace-one-unified-endpoint-management-multi-user-support-for-windows-4l95</guid>
      <description>&lt;p&gt;Author: Pim van de Vis, R&amp;amp;D product engineer for VMware End-User Computing (EUC).&lt;/p&gt;

&lt;p&gt;Since Microsoft launched Active Directory (AD) more than two decades ago, it’s been possible for users to log in to a Windows domain-joined PC with any of their AD user accounts, and the PC would be tailored to their needs. Group policy objects (GPOs) made this possible because they target both the computers and the users. In cases involving shift workers — or shared office PCs — this allowed employee device use to be flexible. GPOs also supported different users sharing the same device, because the device would be personalized upon login.&lt;/p&gt;

&lt;p&gt;With the introduction of Windows 10, Microsoft embedded the Open Mobile Alliance Device Management (OMA-DM) protocol into the operating system. This change allowed Windows to be managed like a mobile device, over the air, which is called Mobile Device Management (MDM). This has become the standard to manage Windows with cloud solutions like VMware Workspace ONE Unified Endpoint Management.&lt;/p&gt;

&lt;p&gt;However, this OMA-DM protocol dates back to the early 2000s, and it was initially designed for mobile phones. Because those are typically personal devices, the shared-device use case was not built into this protocol.&lt;/p&gt;

&lt;p&gt;That means that a Windows device managed with Workspace ONE UEM or Intune — or another MDM product — no longer supports shared device mode. In effect, every user could still log on to a Windows device, but the MDM solution would only manage the computer, not the user. Therefore, the user would not get a personalized experience, and security policies targeted toward the user would not be applied, potentially leaving the device in an unsecured state. The user would also miss his or her personal settings and would need to manually configure the email client, for example. Not the best user experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Workspace ONE UEM now supports multi-user scenarios&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With the release of Workspace ONE UEM 23.02, this has changed. We now support shared PCs and multi-user scenarios, allowing shift workers to use shared Windows devices. Upon user login, the device will install any user-targeted profiles, policies, applications, and settings, which ensures the device is personalized and secure. For example, Outlook will be pre-configured, an SSO user certificate will be installed, the wallpaper will be set, a VPN client will be configured, and much more.&lt;/p&gt;

&lt;p&gt;Workspace ONE UEM users have been asking for this feature, and we’re proud to announce that it is now available in the 23.02 release — and it is unique in the market.&lt;/p&gt;

&lt;p&gt;This means we can now support shift workers, schools, frontline workers, and shared office spaces — and all other use cases involving PC sharing!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rBHwck7X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uipiabqnh2nzhc4agzvr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rBHwck7X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uipiabqnh2nzhc4agzvr.png" alt="Image description" width="604" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The diagram above shows an example of how this functionality works. The Windows device will be managed by Workspace ONE UEM, device-targeted profiles and apps will be shared across users, and at logon each user will have their own personal user-targeted profile and applications installed.&lt;/p&gt;

&lt;p&gt;This functionality has been one of the most requested features for Windows modern management for a while. But because the OMA-DM protocol lacks support for it, this feature required more effort to build: the solution needed to support true Windows multi-user functionality with modern management. We needed to include support for enrollment, user switch, device profiles, user profiles, compliance, app entitlements, and more.&lt;/p&gt;

&lt;p&gt;To explain how this functionality works, I have added the below architecture overview. This diagram shows how a Windows desktop is managed with Workspace ONE UEM. As you can see, next to the OMA-DM protocol, we have added a second channel to communicate with the devices. This is the AirWatch Cloud Messaging (AWCM) service that communicates with the Workspace ONE Intelligent Hub agent. We have already added lots of powerful functions through Intelligent Hub, and now multi-user support is another unique offering in the market for modern management of Windows devices.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rEBqNMN1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4t28nyhkmfgwwfmxhauq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rEBqNMN1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4t28nyhkmfgwwfmxhauq.png" alt="Image description" width="604" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The current multi-user support is just phase one of the release, and we intend to extend the functionality in the future. In the current release, applications installed for specific users will remain on the device and can be used by anybody who logs on to the PC. The recommendation is to assign applications at device level to prevent unwanted access to applications.&lt;/p&gt;

&lt;p&gt;The current release supports Azure AD user accounts only. These can be accounts synchronized from an on-premises AD to Azure AD, but the login name must be the Azure AD username. Devices also need to be pre-registered in the Workspace ONE UEM console using the serial number, either manually or through the API for batch processing.&lt;/p&gt;
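&lt;p&gt;As a rough illustration of that batch path, the sketch below builds one registration record per serial number. The field names are assumptions for illustration only, not the documented UEM API schema; verify them against the Workspace ONE UEM API reference before use:&lt;/p&gt;

```python
# Hypothetical sketch: building batch pre-registration payloads for
# Windows devices in Workspace ONE UEM, keyed by serial number.
# The field names below are illustrative assumptions, not the
# documented UEM API schema.
import json

def build_registration_payloads(serial_numbers, organization_group_id):
    """One registration record per device serial number."""
    return [
        {
            "SerialNumber": sn,
            "LocationGroupId": organization_group_id,  # assumed field: target OG
            "Ownership": "CorporateDedicated",  # shared multi-user PCs are corporate-owned
        }
        for sn in serial_numbers
    ]

if __name__ == "__main__":
    for record in build_registration_payloads(["PF3ABC12", "PF3ABC13"], 42):
        print(json.dumps(record))
```

&lt;p&gt;A script like this would feed each record to the UEM registration endpoint, one call per device or as a batch, depending on what the API supports.&lt;/p&gt;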

&lt;p&gt;&lt;strong&gt;The future of multi-user support for Workspace ONE&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For the next phase of multi-user support, we intend to extend the functionality to support on-premises AD, hybrid, and Azure AD users. Also, the enrollment as a multi-user device will be completed using the Intelligent Hub, removing the requirement to pre-register devices. We’ll also add support for baselines, scripts, sensors, and Freestyle Orchestrator workflows.&lt;/p&gt;

&lt;p&gt;That means the future looks very bright. As excited as we are for this current functionality, we’re also looking forward to providing even more updates in the future.&lt;/p&gt;

&lt;p&gt;With this release, we have bridged a major gap for the many customers who have been waiting patiently. If you want to test this functionality, reach out to support: the feature is not currently enabled by default for all customers.&lt;/p&gt;

</description>
      <category>workspaceone</category>
      <category>vmware</category>
      <category>windows</category>
      <category>hybrid</category>
    </item>
    <item>
      <title>Migration Coordinator – In Place Migration Modes</title>
      <dc:creator>ACE Co-innovation Ecosystem</dc:creator>
      <pubDate>Tue, 15 Aug 2023 08:28:15 +0000</pubDate>
      <link>https://dev.to/ace_ecosystem/migration-coordinator-in-place-migration-modes-2dp</link>
      <guid>https://dev.to/ace_ecosystem/migration-coordinator-in-place-migration-modes-2dp</guid>
      <description>&lt;p&gt;Author: Samuel Kommu, VMware&lt;/p&gt;

&lt;p&gt;In the first part of this blog series, we took a high-level view of all the modes available with Migration Coordinator, a fully GSS-supported tool built into NSX that enables migrating from NSX for vSphere to NSX (NSX-T).&lt;/p&gt;

&lt;p&gt;This second blog in the series takes a closer look at the available options for in-place migrations, along with the pros and cons of each approach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NSX for vSphere: Fixed Topology&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This was the very first mode introduced with Migration Coordinator, in the NSX-T 2.4 release. This mode supports migrating configuration and workloads to NSX using the same hosts that are running NSX for vSphere. It only needs extra capacity to run the NSX appliances, such as the Managers and Edges.&lt;/p&gt;

&lt;p&gt;Locating the mode: Marked in red below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iziW3go9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6h6dbx2zq6wt6y0h96m4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iziW3go9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6h6dbx2zq6wt6y0h96m4.png" alt="Image description" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;NSX Prep:&lt;br&gt;
Installation: NSX manager and Edges&lt;br&gt;
Configuration: None&lt;/p&gt;

&lt;p&gt;Pros:&lt;br&gt;
Workload Migration: Built in&lt;br&gt;
Bridging: Built in&lt;/p&gt;

&lt;p&gt;Cons:&lt;br&gt;
Customization options: None&lt;br&gt;
Timing workload migration: No control&lt;br&gt;
Supported topologies: Only 5&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Distributed Firewall, Host and Workload&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This mode is useful when the requirement is to migrate only Distributed Firewall configuration.&lt;/p&gt;

&lt;p&gt;Locating the mode&lt;/p&gt;

&lt;p&gt;This mode is under the “Advanced Migration Modes” marked in red below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3UvSfjxQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g6btp2qfd8b95lb1s3g1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3UvSfjxQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g6btp2qfd8b95lb1s3g1.png" alt="Image description" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;NSX Prep:&lt;br&gt;
Installation: NSX manager and Edges&lt;br&gt;
Configuration:&lt;br&gt;
Configure the N/S network connectivity and&lt;br&gt;
South bound T0s all the way down to the Segments&lt;/p&gt;

&lt;p&gt;Pros:&lt;br&gt;
Workload Migration: Built in&lt;br&gt;
Bridging: Built in&lt;br&gt;
Customization options: North bound of segment can be customized as required&lt;br&gt;
Supported topologies: Any&lt;/p&gt;

&lt;p&gt;Cons:&lt;br&gt;
Timing workload migration: No control&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NSX for vSphere: User Defined Topology – Complete migration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;User Defined Topology mode is built to merge the simplicity of the “Fixed Topology” mode with the flexibility of the “Distributed Firewall, Host and Workload” mode. In this mode, users still have the flexibility to configure the N/S connectivity and create the T0s. The rest of the configuration, both networking and security, can then be migrated with Migration Coordinator.&lt;/p&gt;

&lt;p&gt;Locating the mode&lt;/p&gt;

&lt;p&gt;This mode is under User Defined Topology mode.&lt;/p&gt;

&lt;p&gt;Click on the User Defined Topology Mode highlighted in red below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HHeKIH-V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kjmtzy0e3xfi7y5j1w9s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HHeKIH-V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kjmtzy0e3xfi7y5j1w9s.png" alt="Image description" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And then select the first option highlighted in red below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--36ylTpc1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/31pt7vvj9dhs895zonv3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--36ylTpc1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/31pt7vvj9dhs895zonv3.png" alt="Image description" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;NSX Prep:&lt;br&gt;
Installation: NSX manager and Edges&lt;br&gt;
Configuration:&lt;br&gt;
Configure the N/S network connectivity and&lt;br&gt;
Create T0s&lt;/p&gt;

&lt;p&gt;Pros:&lt;br&gt;
Workload Migration: Built in&lt;br&gt;
Bridging: Built in&lt;br&gt;
Customization options: N/S connectivity and T0 design&lt;br&gt;
Supported topologies: Any&lt;/p&gt;

&lt;p&gt;Cons:&lt;br&gt;
Timing workload migration: No control&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NSX Global Manager: User Defined Topology – Complete migration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For customers with cross-vCenter deployments, Migration Coordinator allows migrating their NSX for vSphere environment into Federation using the “User Defined Topology – Complete Migration” mode for an in-place migration approach. This mode is only available via the Global Manager.&lt;/p&gt;

&lt;p&gt;Locating the mode&lt;/p&gt;

&lt;p&gt;On the Global Manager under System -&amp;gt; Migrate, select the NSX for vSphere mode, highlighted in red below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O3GfHQnl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/80mqm39oqxdutne6nxhg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O3GfHQnl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/80mqm39oqxdutne6nxhg.png" alt="Image description" width="800" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select “Complete migration”, highlighted in red below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YwB0CgIr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ec5980tbr5glittg9f57.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YwB0CgIr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ec5980tbr5glittg9f57.png" alt="Image description" width="800" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;NSX Prep:&lt;br&gt;
Installation: NSX global and local manager, Edges&lt;br&gt;
Configuration:&lt;br&gt;
Configure the N/S network connectivity and&lt;br&gt;
Create T0s&lt;/p&gt;

&lt;p&gt;Pros:&lt;br&gt;
Workload Migration: Built in&lt;br&gt;
Bridging: Built in&lt;br&gt;
Customization options: N/S connectivity and T0 design&lt;br&gt;
Supported topologies: Any&lt;/p&gt;

&lt;p&gt;Cons:&lt;br&gt;
Timing workload migration: No control&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the second part of this blog series, we took a closer look at the in-place migration options available with Migration Coordinator. In-place migration modes are designed for cases where the preference is to use existing hardware and let Migration Coordinator take care of all the details of workload migration. Some of these modes also allow flexibility in defining the north/south connectivity.&lt;/p&gt;

&lt;p&gt;In the third part of this series, we will take a look at the lift and shift migration modes.&lt;/p&gt;

</description>
      <category>nsx</category>
      <category>migration</category>
      <category>workload</category>
      <category>vmware</category>
    </item>
    <item>
      <title>Migration Coordinator: Approaches and Modes</title>
      <dc:creator>ACE Co-innovation Ecosystem</dc:creator>
      <pubDate>Tue, 08 Aug 2023 07:43:58 +0000</pubDate>
      <link>https://dev.to/ace_ecosystem/migration-coordinator-approaches-and-modes-2p67</link>
      <guid>https://dev.to/ace_ecosystem/migration-coordinator-approaches-and-modes-2p67</guid>
      <description>&lt;p&gt;Author: Samuel Kommu, VMware&lt;/p&gt;

&lt;p&gt;Migration Coordinator is a fully supported free tool that is built into NSX Data Center to help migrate from NSX for vSphere to NSX (aka NSX-T). Migration Coordinator was first introduced in NSX-T 2.4 with a couple of modes to enable migrations. Through customer conversations over the years, we’ve worked to expand what can be done with Migration Coordinator. Today, Migration Coordinator supports over 10 different ways to migrate from NSX for vSphere to NSX.&lt;/p&gt;

&lt;p&gt;In this blog series, we will look at the available migration approaches and the prep work involved with each. The series should help you evaluate, from multiple angles, the right mode for migrating from NSX for vSphere to NSX.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3 Standard Migration Modes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S88U5uAU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uwt8qqjldrvts7rt9695.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S88U5uAU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uwt8qqjldrvts7rt9695.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3 Advanced Migration Modes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L6IGJStt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/muk1mplcpf719tsc8wez.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L6IGJStt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/muk1mplcpf719tsc8wez.png" alt="Image description" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3 More Modes Available Under User Defined Topology&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zChewlR3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uahs4ujcucuaoy9h1ex6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zChewlR3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uahs4ujcucuaoy9h1ex6.png" alt="Image description" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lastly, 2 More Modes Dedicated to Cross-vCenter to Federation Migration, Available in the NSX Global Manager UI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PrIVW3yP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/orkom846r08z35aa484k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PrIVW3yP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/orkom846r08z35aa484k.png" alt="Image description" width="800" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some of these modes take a cookie-cutter approach and require very little prep work, while others allow you to customize the migration to suit your needs. In this blog, we will take a high-level look at these modes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Migration Coordinator Approaches&lt;/strong&gt;&lt;br&gt;
At a high level, Migration Coordinator supports two kinds of migration approaches.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In-Place&lt;/li&gt;
&lt;li&gt;Lift and Shift&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;In-Place Migration&lt;/strong&gt;&lt;br&gt;
In-place migration modes migrate from NSX for vSphere to NSX using the same hardware that NSX for vSphere is running on. There is no requirement to bring in new hardware: as long as there is enough capacity to run the required NSX infrastructure, such as the NSX Managers and Edges, Migration Coordinator can be used.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Config migration&lt;/strong&gt;&lt;br&gt;
These modes generally migrate everything: configuration and workloads. One exception is the “Distributed Firewall, Host and Workload” mode, which is preferred by customers who want to migrate only the Distributed Firewall-related configuration rather than the entire configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workload migration&lt;/strong&gt;&lt;br&gt;
In the in-place migration modes, Migration Coordinator takes care of moving the workloads from NSX for vSphere to NSX; there is no need for a separate tool or approach for workload migration. These modes also include a built-in mechanism that allows workloads on NSX for vSphere to talk to workloads on NSX during the migration, without having to create any bridges between the two environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bridging&lt;/strong&gt;&lt;br&gt;
Customers with smaller deployments may be able to migrate everything in a single maintenance window. For those with larger deployments, migration may take longer, with workloads spread across both NSX for vSphere and NSX. Migration Coordinator’s in-place migration modes allow the workloads to talk to each other without any drop in either network connectivity or security posture.&lt;br&gt;
In these modes, Migration Coordinator fully controls the timing of the actual workload migration. While this approach has been leveraged by many customers, those dealing with multiple tenants on their own schedules may prefer granular control over when each workload is migrated. For such use cases, consider the second high-level approach: lift and shift.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lift and Shift Migration&lt;/strong&gt;&lt;br&gt;
Lift and shift migration modes migrate NSX for vSphere from one set of hardware to a new NSX instance installed on a completely different set of hardware, which may be new or repurposed from NSX for vSphere. These modes are generally preferred by those who are in the middle of a hardware refresh cycle, or who want granular control over when each workload is migrated and over the northbound connectivity and design.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuration migration&lt;/strong&gt;&lt;br&gt;
These modes migrate the configuration of the T0s and the NSX entities south of them, such as DFW rules. The northbound configuration of NSX (BGP connectivity, etc.) should be prepared by the user in advance, before running the migration with Migration Coordinator.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workload migration&lt;/strong&gt;&lt;br&gt;
In these modes, Migration Coordinator does not migrate the workloads. In all of them, you can migrate the workloads using vMotion. In one specific mode discussed later in this blog, “User Defined Topology: Configuration and Edge Migration,” you have the choice of leveraging either vMotion or HCX.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bridging&lt;/strong&gt;&lt;br&gt;
In the lift and shift modes, depending on the duration of the migration, you may need to set up a bridge using either NSX bridging or HCX to ensure there is no drop in network connectivity while workloads span both environments.&lt;/p&gt;
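&lt;p&gt;The trade-offs above can be condensed into a small decision helper. The sketch below simply restates the prose as code and is illustrative only:&lt;/p&gt;

```python
# A small decision helper summarizing the trade-offs described above.
# The rules restate the prose: lift and shift suits hardware refreshes
# and per-workload timing control; in-place reuses the existing hosts
# and lets Migration Coordinator drive the workload moves itself.

def suggest_approach(can_reuse_existing_hosts, need_per_workload_timing):
    """Return the migration approach suggested by the stated constraints."""
    if need_per_workload_timing:
        # In-place modes give no control over workload migration timing.
        return "Lift and Shift"
    if can_reuse_existing_hosts:
        # Same hardware, with built-in workload migration and bridging.
        return "In-Place"
    # No spare capacity on the existing hosts: stand up NSX elsewhere.
    return "Lift and Shift"

print(suggest_approach(True, False))
```

&lt;p&gt;Real deployments will weigh more factors than two booleans, of course; this is only a compact way to restate the guidance above.&lt;/p&gt;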

&lt;p&gt;&lt;strong&gt;Migration Modes&lt;/strong&gt;&lt;br&gt;
With that intro, the following are the migration modes available with Migration Coordinator under the two high-level approaches of (1) In-Place and (2) Lift and Shift.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In-Place Modes&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;NSX for vSphere: Fixed Topology&lt;/li&gt;
&lt;li&gt;NSX for vSphere with vRealize Automation: similar in approach to the first mode, “NSX for vSphere: Fixed Topology”, but operating in lock-step with vRA.&lt;/li&gt;
&lt;li&gt;Distributed Firewall, Host, and Workload&lt;/li&gt;
&lt;li&gt;NSX for vSphere: User-Defined Topology – Complete migration&lt;/li&gt;
&lt;li&gt;NSX Global Manager: User-Defined Topology – Complete migration&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Lift and Shift Modes&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Distributed Firewall&lt;/li&gt;
&lt;li&gt;NSX for vSphere: User-Defined Topology – Configuration migration&lt;/li&gt;
&lt;li&gt;NSX for vSphere: User-Defined Topology – Configuration and Edge migration&lt;/li&gt;
&lt;li&gt;NSX Global Manager: User-Defined Topology – Configuration migration&lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
    <item>
      <title>Introducing New Networking and Advanced Security Capabilities in NSX 4.1</title>
      <dc:creator>ACE Co-innovation Ecosystem</dc:creator>
      <pubDate>Thu, 03 Aug 2023 07:27:13 +0000</pubDate>
      <link>https://dev.to/ace_ecosystem/introducing-new-networking-and-advanced-security-capabilities-in-nsx-41-5aoi</link>
      <guid>https://dev.to/ace_ecosystem/introducing-new-networking-and-advanced-security-capabilities-in-nsx-41-5aoi</guid>
      <description>&lt;p&gt;Author: NSX Team, VMware&lt;/p&gt;

&lt;p&gt;We’re delighted to announce the general availability of VMware NSX 4.1, a release that delivers new functionalities for virtualized networking and advanced security for private, hybrid, and multi-clouds.  This release’s new features and capabilities will enable VMware NSX customers to take advantage of enhanced networking and advanced security, increased operational efficiency and flexibility, and simplified troubleshooting.&lt;br&gt;
Read on to discover the key features in the latest NSX release.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stay Ahead of Threats and Safeguard Your Network&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Uncover Every Threat&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;NSX 4.1 introduces a new feature that allows the sending of IDS/IPS logs from the NSX Gateway firewall (GFW) to our Network Detection and Response (NDR), which is part of VMware NSX Advanced Threat Prevention (ATP). This new functionality is complementary to our existing NSX Distributed Firewall (DFW), which has had IDS/IPS logs sent to the NDR for quite some time now. With this new feature, NSX 4.1 customers can gain a more comprehensive view of network activity, allowing faster and more effective responses to threats. By analyzing IDS/IPS logs from GFW and DFW in combination with our Network Traffic Analysis (NTA) and Sandboxing, our NDR system can correlate events and identify attack patterns, providing a complete picture of the threats being launched against the network.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Secure Windows 11&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;NSX 4.1 introduces NSX Guest Introspection support for Windows 11, providing advanced threat detection and remediation for virtual machines running the latest version of Microsoft’s operating system. This is in addition to support for previously supported Windows versions and a range of Linux-based operating systems. NSX Guest Introspection uses a thin agent driver inside VMware Tools, to provide real-time information about the state of virtual machines, allowing for highly effective security measures. With NSX 4.1, customers can take advantage of the latest security features and enhancements while maintaining support for a wide range of operating systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Streamline Container Security&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Enhance container security and policy enforcement with true centralized management of firewall rules from our latest improvements to Antrea and NSX Integration. With NSX 4.1, firewall rules can be created with both Kubernetes and NSX objects, and dynamic groups can also be created based on NSX tags and Kubernetes labels. Additionally, this release allows for the creation of firewall policies that allow or block traffic between Virtual Machines and Kubernetes pods in one single rule. Firewall rules can also be applied to endpoints which include both NSX and Kubernetes Objects. NSX 4.1 also includes Traceflow and UI improvements which allow for improved troubleshooting and provide true centralized management of Kubernetes network policies via NSX.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AhAfvx3F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nk3h6r2u7qz5qgyw6ll8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AhAfvx3F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nk3h6r2u7qz5qgyw6ll8.png" alt="Image description" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Layer 3 Networking Enhancements&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IPv6 Support&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In NSX 4.0, we introduced an IPv6-based Management Plane that supported IPv6 communication from external systems to the NSX management cluster (Local Manager only). This included NSX Manager support for dual-stack (IPv4 and IPv6) on the external management interface. With NSX 4.1, we introduce IPv6 support for Control-plane and Management-plane communication between Transport Nodes and NSX Manager. The NSX Manager cluster must still be deployed in dual-stack mode (IPv4 and IPv6) and will be able to communicate with Transport Nodes (ESXi hosts and Edge Nodes) over IPv4 or IPv6. When a Transport Node is configured with dual-stack (IPv4 and IPv6), IPv6 communication is preferred.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zkBsNPBi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9sjuyglsz54gqj7t14mf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zkBsNPBi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9sjuyglsz54gqj7t14mf.png" alt="Image description" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Inter-VRF Routing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This release introduces a more advanced VRF interconnect and route leaking model. Users will be able to configure inter-VRF routing using easier workflows and fine-grained controls by importing and exporting routes between VRFs. Tenants in different VRFs have total control over their private routing space and can decide independently which routes they want to accept or advertise.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Increased Operational Efficiency&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-Tenancy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;NSX 4.1 introduces multi-tenancy constructs to enable flexible resource allocation and management that increases operational efficiency. The Enterprise Admin (Provider) can segment the platform into Projects, giving different spaces to different tenants while maintaining visibility and control. This extension to the NSX consumption model allows NSX users to consume their own objects, see alarms related to their own configurations, and test connectivity between their workloads with Traceflow. Users can switch context from one Project to another according to the user RBAC. Users tied to specific Projects only have access to their own Projects. Logs can be attached to a Project using a “Project short log id” which can be applied to the Gateway Firewall logs and the Distributed Firewall logs.&lt;/p&gt;
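&lt;p&gt;To make the consumption model concrete, the sketch below shows what creating a Project might look like against the Policy API. The URL path and field names (including the short log id) are assumptions for illustration, not the documented schema; confirm the exact contract in the NSX 4.1 API reference:&lt;/p&gt;

```python
# Illustrative sketch: creating a Project through the NSX Policy API.
# The /orgs/default/projects path and the payload fields are assumptions
# based on the multi-tenancy model described above, not the documented
# NSX 4.1 schema.

def build_project_request(project_id, display_name, short_log_id):
    """Return (method, url, body) for a hypothetical Project creation call."""
    url = f"/policy/api/v1/orgs/default/projects/{project_id}"
    body = {
        "display_name": display_name,
        # Assumed field: the "Project short log id" stamped onto
        # Gateway Firewall and Distributed Firewall log lines.
        "short_id": short_log_id,
    }
    return "PATCH", url, body
```

&lt;p&gt;An Enterprise Admin tool could then send one such request per tenant space it carves out of the platform.&lt;/p&gt;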

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CmxygllG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jmlc2gpdk3asmvphkulj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CmxygllG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jmlc2gpdk3asmvphkulj.png" alt="Image description" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Online Diagnostic System&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;NSX 4.1 introduces Online Diagnostic System, a new feature that will simplify troubleshooting and help automate the debugging process. This system provides predefined runbooks which contain debugging steps to troubleshoot specific issues. These runbooks can be invoked by API and will trigger debugging steps using CLI, API, and scripts. Recommended actions are provided post-debugging to fix the issue and artifacts generated related to the debugging can be downloaded for further analysis.&lt;/p&gt;
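&lt;p&gt;As a sketch of the API-driven workflow described above, the snippet below builds a hypothetical runbook invocation request. The endpoint path and body fields are illustrative assumptions, not the documented NSX endpoints; consult the NSX 4.1 API reference for the real contract:&lt;/p&gt;

```python
# Hypothetical sketch: invoking an Online Diagnostic System runbook via
# the NSX Manager REST API. The URL path and body fields below are
# illustrative assumptions, not the documented NSX 4.1 endpoints.

def build_runbook_invocation(runbook_name, transport_node_id):
    """Return (url, body) for a hypothetical runbook invocation request."""
    url = f"/api/v1/diagnosis/runbooks/{runbook_name}/invocations"  # assumed path
    body = {
        "target_node_id": transport_node_id,  # assumed field: node to debug
    }
    return url, body

url, body = build_runbook_invocation("OverlayTunnel", "tn-01")
print(url)
```

&lt;p&gt;After the run completes, the recommended actions and generated artifacts would be fetched for further analysis, as described above.&lt;/p&gt;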

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4efxhsVb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fregq39yn18iu53sp1t8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4efxhsVb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fregq39yn18iu53sp1t8.png" alt="Image description" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The NSX 4.1 release offers key updates and enhancements across NSX use cases for private, public, and multi-clouds, enabling you to continue accelerating the delivery of value to your organization. The release is generally available — check out the Release Notes  (&lt;a href="https://docs.vmware.com/en/VMware-NSX/4.1.0/rn/vmware-nsx-410-release-notes/index.html"&gt;https://docs.vmware.com/en/VMware-NSX/4.1.0/rn/vmware-nsx-410-release-notes/index.html&lt;/a&gt;) covering all features and capabilities delivered. Follow us on Twitter @vmwarensx (&lt;a href="https://twitter.com/vmwarensx"&gt;https://twitter.com/vmwarensx&lt;/a&gt;) and LinkedIn (&lt;a href="https://www.linkedin.com/company/vmware-networking-and-security/"&gt;https://www.linkedin.com/company/vmware-networking-and-security/&lt;/a&gt;) for updates, and stay tuned for additional blogs on the key capabilities and features in NSX 4.1.&lt;/p&gt;

</description>
      <category>vmware</category>
      <category>nsx</category>
      <category>security</category>
      <category>networking</category>
    </item>
    <item>
      <title>NSX Multi-Tenancy Journey</title>
      <dc:creator>ACE Co-innovation Ecosystem</dc:creator>
      <pubDate>Wed, 02 Aug 2023 09:00:05 +0000</pubDate>
      <link>https://dev.to/ace_ecosystem/nsx-multi-tenancy-journey-52ed</link>
      <guid>https://dev.to/ace_ecosystem/nsx-multi-tenancy-journey-52ed</guid>
      <description>&lt;p&gt;Author: Thomas Vigneron, VMware&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;From Data-plane Multi-tenancy to a Complete Multi-tenancy Framework&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We are delighted to announce Projects in NSX, a new feature that enables granular resource management for multiple tenants within NSX deployments. &lt;/p&gt;

&lt;p&gt;Projects takes multi-tenancy support in NSX to the next level by delivering flexible resource allocation and management. Enterprise Admins can segment the platform into Projects, assigning different spaces to different tenants while maintaining full visibility and control. This extension to the NSX consumption model allows NSX users to consume their own objects, see alarms related to their own configurations and test connectivity between their workloads with Traceflow. &lt;/p&gt;

&lt;p&gt;This post provides an overview of new multi-tenancy features in NSX, explaining how they have evolved from traditional data-plane multi-tenancy (which remains supported) to the new multi-tenancy framework based on Projects (which admins can optionally leverage). &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data-plane Multi-tenancy – Routing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before discussing the new multi-tenancy features that Projects introduces, let’s go over how multi-tenancy has traditionally been available at the data-plane layer. &lt;/p&gt;

&lt;p&gt;NSX supports a multi-tiered routing model with logical separation between the different gateways within the NSX infrastructure, giving complete control and flexibility over services and policies. This model enables simple and stable interconnection in the data center as well as automation of complex, potentially isolated application environments.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The Tier-0 Gateway provides a gateway service between the logical and physical network. It is traditionally set up with dynamic routing and/or services.  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Tier-1 Gateway provides a tenant or an application router with a range of services (NAT, GFW, DNS forwarder, etc.). NSX manages their connection and route distribution to the Tier-0 Gateway.  &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
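&lt;p&gt;Expressed through the declarative NSX Policy API, attaching a Tier-1 Gateway to a Tier-0 comes down to setting a policy path. The sketch below follows the /infra tree convention; treat the exact field set as an assumption and verify it against the NSX API reference:&lt;/p&gt;

```python
# A minimal sketch of the two-tier model expressed through the NSX
# declarative Policy API: a Tier-1 gateway points at its Tier-0 via a
# policy path. Treat the exact field set as an assumption and verify it
# against the NSX API reference.

def tier1_attachment(tier1_id, tier0_id):
    """Return (url, body) for attaching a Tier-1 Gateway to a Tier-0."""
    url = f"/policy/api/v1/infra/tier-1s/{tier1_id}"
    body = {
        "display_name": tier1_id,
        # Northbound connection: NSX manages route distribution to this T0.
        "tier0_path": f"/infra/tier-0s/{tier0_id}",
    }
    return url, body
```

&lt;p&gt;A tenant or application router per Tier-1, each attached to the shared Tier-0, is exactly the data-plane separation the diagram above depicts.&lt;/p&gt;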

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b8X9hCvM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v8xrbkxbnzbpgcziluvj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b8X9hCvM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v8xrbkxbnzbpgcziluvj.png" alt="Image description" width="725" height="607"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Extended Networking Capabilities&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are multiple ways to extend this model for additional segmentation.&lt;/p&gt;

&lt;p&gt;In the image below, you can see multiple Tier-0 Gateways in the same NSX deployment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tenant A can be mapped to Tier-0 A and underlying Tier-1s&lt;/li&gt;
&lt;li&gt;Tenant B can be mapped to Tier-0 B and underlying Tier-1s&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SRc-R0pA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/md4p8ph1yuqud0khi7bm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRc-R0pA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/md4p8ph1yuqud0khi7bm.png" alt="Image description" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This configuration can be used to propagate network segmentation from the data center into NSX, but it requires a separate set of Edge Nodes for each environment.&lt;/p&gt;

&lt;p&gt;With the introduction of Tier-0 VRF, this requirement no longer applies. Tier-0 VRF Gateways are hosted on a Parent Tier-0 Gateway on the Edge Node. As the image below shows, we can implement the following configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tenant A can be mapped to Tier-0 VRF A and underlying Tier-1s&lt;/li&gt;
&lt;li&gt;Tenant B can be mapped to Tier-0 VRF B and underlying Tier-1s&lt;/li&gt;
&lt;/ul&gt;
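
&lt;p&gt;In API terms, a Tier-0 VRF is modeled as a Tier-0 Gateway that references its parent. A hypothetical sketch (names are illustrative; verify field names against your NSX release):&lt;/p&gt;

```python
def tier0_vrf_payload(display_name, parent_tier0_path):
    """Build a PATCH body for a Tier-0 VRF hosted on a parent Tier-0 Gateway."""
    return {
        "display_name": display_name,
        "vrf_config": {
            "tier0_path": parent_tier0_path,  # parent gateway on the shared Edge Nodes
        },
    }

# One VRF per tenant, all sharing the parent Tier-0's Edge Nodes (names illustrative):
vrf_a = tier0_vrf_payload("tier0-vrf-a", "/infra/tier-0s/parent-t0")
vrf_b = tier0_vrf_payload("tier0-vrf-b", "/infra/tier-0s/parent-t0")
```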

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---RzKDO5i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j6uj3b7xxn1kpbyc70me.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---RzKDO5i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j6uj3b7xxn1kpbyc70me.png" alt="Image description" width="760" height="620"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The use cases for Tier-0 VRF are further extended with the introduction of EVPN, which simplifies North-South configuration at scale by removing the need to have per-VRF routing configurations.&lt;/p&gt;

&lt;p&gt;To learn more, check out this blog on multi-tenant data centers with NSX EVPN.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data-plane Multi-tenancy – Distributed Security&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;NSX also offers distributed security, allowing for the isolation of workloads (VMs or containers) and control of the traffic between them. Because security is handled within the vNIC, isolation remains possible no matter the networking architecture. You get the same security protections regardless of whether the VMs are on the same host or the same subnet.&lt;/p&gt;

&lt;p&gt;This powerful capability allows you to group workloads and create rulesets based on a variety of attributes, from OS to line of business characteristics.&lt;/p&gt;
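
&lt;p&gt;For illustration, such a grouping can be expressed in the NSX Policy data model as a Group whose membership is a condition on VM attributes such as tags. The group name and tag value below are hypothetical:&lt;/p&gt;

```python
def tag_group_payload(display_name, tag_value):
    """Build a Policy Group whose members are VMs carrying a given NSX tag.

    Because enforcement happens at the vNIC, membership is independent of
    host placement or subnet; the tag value here is hypothetical.
    """
    return {
        "display_name": display_name,
        "expression": [{
            "resource_type": "Condition",
            "member_type": "VirtualMachine",
            "key": "Tag",
            "operator": "EQUALS",
            "value": tag_value,
        }],
    }

group = tag_group_payload("web-tier-vms", "web")
```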

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R7qUmShd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9uidcgqfkbowysi1qf5g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R7qUmShd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9uidcgqfkbowysi1qf5g.png" alt="Image description" width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This also means that a single rule pushed from NSX can be applied to all workloads in the environment, which increases the value of management-plane multi-tenancy: without segmentation, delegation can quickly become inefficient and complex.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloud Management Platforms&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The model shown below allows a provider to set up the Tier-0 Gateway, define how it connects to the network, and expose the creation of Tier-1s through a Cloud Management Platform (such as Aria Automation, OpenStack or vCloud Director).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kZX_ucPZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8bth2r5qkncb75g4xscy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kZX_ucPZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8bth2r5qkncb75g4xscy.png" alt="Image description" width="800" height="610"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tenancy is achieved from a data-plane perspective through NSX and from a management plane perspective through a Cloud Management Platform, which isolates the different environment configurations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Introduce a Multi-tenancy Framework in NSX?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the models previously discussed, you can see how NSX allows users to apply the desired data-plane segmentation. However, prior to the release of NSX 4.1, tenants were not explicitly defined in NSX; that logic was implemented by either the NSX Administrator or the Cloud Management Platform.&lt;/p&gt;

&lt;p&gt;What if a security team wanted to delegate management of firewall rules, requiring role-based access to NSX? What if the same users on that team wanted to see only the alarms relevant to their environment? Or if they wanted to collect only their assigned firewall logs within their tenant?&lt;/p&gt;

&lt;p&gt;These are just a few scenarios that highlight the challenges teams face; from a management and monitoring perspective, there is a clear need for multi-tenant constructs in NSX.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introducing Projects&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Multi-tenancy in NSX 4.1 is made possible by the introduction of Projects in the platform.&lt;/p&gt;

&lt;p&gt;The Enterprise Admin (Provider) can segment the platform into defined Projects, delegating different spaces to different tenants, each with its own objects, configurations, VMs, and monitoring (based on alarms and logs).&lt;/p&gt;

&lt;p&gt;Projects exist alongside the traditional data model, are optional to use, and do not break compatibility with existing setups in any way. The Enterprise Admin can still access all features outside of the Project (from system setup to firewall rules) but can use Projects to define tenants for logical consumption, if desired.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mqmw8CPF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sd2ridraol0qps32wunv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mqmw8CPF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sd2ridraol0qps32wunv.png" alt="Image description" width="800" height="586"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Provider View: Creating and Managing Projects&lt;/strong&gt;&lt;br&gt;
From NSX 4.1.0 onwards, Projects are available front and center in the NSX UI within the drop-down menu at the top of the screen. When accessing the platform, the Enterprise Administrator will be logged into the Default space, as indicated by the drop-down menu.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RmArdImt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fot06pcewojejaui6zu0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RmArdImt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fot06pcewojejaui6zu0.png" alt="Image description" width="624" height="109"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the Default space, Enterprise Admins can have a consolidated view of all Projects or switch to view a specific Project. They can also create multiple tenants with different Projects (Project 1, Project 2, etc.). To do so, they must allocate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;At least one Tier-0 or Tier-0 VRF (Multiple supported)&lt;/li&gt;
&lt;li&gt;At least one Edge Cluster (Multiple supported)&lt;/li&gt;
&lt;li&gt;The User(s) allocated to the Project&lt;/li&gt;
&lt;li&gt;A short log ID used to label logs pertaining to the Project (limited to security logs in NSX 4.1.0)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is important to note that Tier-0/Tier-0 VRF and Edge Clusters can be shared across Projects if desired by the Enterprise Admin.&lt;/p&gt;
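
&lt;p&gt;As a rough sketch, a Project with these allocations could be created through the Policy API. The field names below are assumptions based on the NSX 4.1 data model, and user allocation happens separately through role bindings; verify everything against the API reference for your release:&lt;/p&gt;

```python
def project_payload(display_name, tier0_paths, edge_cluster_paths, short_log_id):
    """Build a body for creating a Project under /policy/api/v1/orgs/default/projects/.

    Field names are assumptions based on the NSX 4.1 Policy data model;
    user allocation is handled separately through role bindings.
    """
    return {
        "display_name": display_name,
        "tier_0s": tier0_paths,                        # one or more Tier-0s / Tier-0 VRFs
        "site_infos": [{
            "edge_cluster_paths": edge_cluster_paths,  # one or more Edge Clusters
        }],
        "short_log_identifier": short_log_id,          # stamped on security logs
    }

body = project_payload(
    "Project 1",
    ["/infra/tier-0s/t0-gateway"],
    ["/infra/sites/default/enforcement-points/default/edge-clusters/ec-1"],
    "proj1",
)
```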

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---yvZQAVm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iqiihf7n2i1qiaampx7f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---yvZQAVm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iqiihf7n2i1qiaampx7f.png" alt="Image description" width="624" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once assigned, Project Users for Project 1 can directly access NSX within the scope defined for them. They can also create configurations deployed on the allocated Edge clusters, which can connect to the allocated Tier-0 or Tier-0 VRF.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DptU62to--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/davtfr8j1ir8dr7g14eb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DptU62to--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/davtfr8j1ir8dr7g14eb.png" alt="Image description" width="800" height="711"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Enterprise Admin can also pre-assign configuration within the Project to simplify consumption, or limit the number of objects created through Quotas. The image below shows an example of assigned Quotas.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KxYEC-JO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jsnxecm6qtjxrn0gh60x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KxYEC-JO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jsnxecm6qtjxrn0gh60x.png" alt="Image description" width="624" height="547"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Enterprise Admin can create system-wide firewall rules that apply to all VMs across all environments. Those rules are configured from the Default space and cannot be modified within tenants.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tenant View: Projects Consumption&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the Projects have been set up, access can be delegated to the tenant. The Enterprise Admin can assign a generic role of Project Admin or use something more targeted, such as Security Admin of Project 1 or Network Operator of Project 1. The tenant can consume NSX via the UI or API.&lt;/p&gt;

&lt;p&gt;Upon logging in, users will land directly in their assigned Project and see only configurations, alarms, VMs and so on that are relevant to their Project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IXTrlyq3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1pfbslitx6sefudx5jpg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IXTrlyq3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1pfbslitx6sefudx5jpg.png" alt="Image description" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Configuration will be restricted to logical objects. Tenants cannot manage the platform setup (installation, upgrades, etc.) because these features are kept under Enterprise Admin management. Other features that remain under Enterprise Admin management and are not exposed to the tenant include Tier-0 configuration and Exclusion lists.&lt;/p&gt;

&lt;p&gt;List of features made available under Projects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Networking in Project&lt;/strong&gt;&lt;br&gt;
For exposed features, the consumption under Projects is the same as it would be outside the Project. Creation of Tier-1s, segments, and other configuration follows the same model, using the allocated Tier-0(s)/Tier-0 VRF(s) and Edge Cluster(s). Information on allocated resources (Quota) is available in the Project tab.&lt;/p&gt;
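
&lt;p&gt;Under the hood, Project-scoped objects live under an org/project prefix in the Policy API URL space. A small helper illustrates the assumed pattern (the manager hostname, Project ID, and object path are all illustrative):&lt;/p&gt;

```python
def project_scoped_url(manager, project_id, object_path):
    """Prefix a Policy API object path with its assumed Project scope.

    Outside a Project, the same object would live directly under
    /policy/api/v1/infra/...; inside a Project, it is nested under
    /orgs/default/projects/{project_id}.
    """
    return (f"https://{manager}/policy/api/v1"
            f"/orgs/default/projects/{project_id}{object_path}")

# A Tier-1 created by a Project 1 user (names are illustrative):
url = project_scoped_url("nsx.example.com", "project-1", "/infra/tier-1s/web-t1")
```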


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AIzjCHbp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pn9lnc1jqjf6sbap4u7j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AIzjCHbp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pn9lnc1jqjf6sbap4u7j.png" alt="Image description" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security in Project&lt;/strong&gt;&lt;br&gt;
One of the primary goals of the Project feature is to enable delegation of security policy management while avoiding the risk of rules being applied to the wrong VMs.&lt;/p&gt;

&lt;p&gt;When a Project is created, a group representing the Project is also created, alongside some default rules allowing for communication inside the Project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OTyscQDV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/li884tqa1dewfrrq2anm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OTyscQDV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/li884tqa1dewfrrq2anm.png" alt="Image description" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Project Admin can manage their own rules by changing the default rules, creating new rules and so on. These rules will only apply to VMs connected to the segment for their Project. All the other VMs (not connected to Project segments) won’t be visible from the Project and won’t be impacted by rules configured within the Project.&lt;/p&gt;

&lt;p&gt;It is now possible to give users access to the NSX Distributed Firewall while removing the risk that they could create a rule impacting the entire system.&lt;/p&gt;

&lt;p&gt;As mentioned, rules defined by the Enterprise Admin in the Default space can apply to VMs within a Project and take precedence over Project rules. This allows the Enterprise Admin to create global rules that apply to all workloads, or to specific Projects. These global rules cannot be modified by Project users.&lt;/p&gt;

&lt;p&gt;Logs from Distributed Firewall and Gateway Firewall will be labeled with the Project information so that they can be identified and separated by the tenant.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Multi-tenancy within NSX has been available at the data-plane layer for several years, but the introduction of Projects supercharges the operational efficiency of this capability. Enterprise Admins now benefit from much broader and more flexible control over multi-tenant configurations, including role-based access controls, Quotas, Shares, and much more. At the same time, tenants can manage their own resources and configurations more efficiently via capabilities like tenant-aware logs and alarms.&lt;/p&gt;

&lt;p&gt;Again, Projects is an optional feature; you can continue to manage multi-tenancy at the data-plane layer alone if desired, and doing so may make sense for simpler use cases where the primary goal is logical separation between gateways. For more complex use cases, however, Projects adds substantial flexibility that makes it easier than ever to create and manage multi-tenant deployments.&lt;/p&gt;

</description>
      <category>vmware</category>
      <category>nsx</category>
    </item>
    <item>
      <title>Top 5 ways improved visibility enhances performance and user experience in the Horizon Cloud next-gen platform</title>
      <dc:creator>ACE Co-innovation Ecosystem</dc:creator>
      <pubDate>Tue, 25 Jul 2023 05:48:29 +0000</pubDate>
      <link>https://dev.to/ace_ecosystem/top-5-ways-improved-visibility-enhances-performance-and-user-experience-in-the-horizon-cloud-next-gen-platform-j0i</link>
      <guid>https://dev.to/ace_ecosystem/top-5-ways-improved-visibility-enhances-performance-and-user-experience-in-the-horizon-cloud-next-gen-platform-j0i</guid>
      <description>&lt;p&gt;Author: Nilesh Deo, a member of the End-User Computing Product Marketing team supporting the VMware Horizon product. Debasis Patra, part of Horizon Cloud Service — Product management team at VMware driving the observability, monitoring, and notifications aspects of the product.&lt;/p&gt;

&lt;p&gt;We launched our VMware Horizon Cloud next-gen platform at VMware Explore US in 2022, providing customers who deliver virtual desktops and apps with a modern hybrid Desktop-as-a-Service (DaaS) architecture built around lowering costs and improving scalability. Following the initial release, at VMware Explore Europe 2022 we announced our intent to support hybrid cloud deployments, which we recently delivered with Horizon 8 in our 2303 release. Now we are excited to release new monitoring improvements for Horizon Cloud next-gen environments. Let’s look at five of the top updates we have made to improve visibility across virtual desktops, app performance, and user experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Comprehensive visibility of resource usage and infrastructure monitoring&lt;/strong&gt;&lt;br&gt;
For IT admins, managing DaaS infrastructure can be complex and time-consuming if you do not have the right level of information available. Now from the Horizon Universal Console you can get detailed infrastructure and resource usage information for your VDI components available in Microsoft Azure environments. You have a wealth of information at your fingertips — including resource utilization, session information, VM usage, and infrastructure errors — that can be used for alerting. For example, knowing that some users are experiencing high CPU usage on their virtual desktops can help IT admins to identify which processes are consuming CPU on those desktops and then remediate the problem. All this information is consolidated in a default homepage view, and through filtering you can get relevant information by provider type or edge.&lt;/p&gt;

&lt;p&gt;Furthermore, you can monitor health, usage, and topology information for critical VDI infrastructure components that are customer managed with Horizon 8 environments connected to Horizon Cloud next-gen. Infrastructure monitoring for connection servers and Unified Access Gateway (UAG) is now available for Horizon 8 deployments. This feature provides a single pane of glass for all the infrastructure and resource usage information, to understand which resources are being consumed, how they are performing, and if they are underutilized. This will help you make decisions such as balancing resource capacity to optimize your infrastructure cost without sacrificing user experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Simplified connection server monitoring&lt;/strong&gt;&lt;br&gt;
The Horizon Universal Console provides customers with a simplified way to manage desktops and apps, whether from Horizon 8 pods or hosted natively on Microsoft Azure. (To learn more about how to deploy the Horizon Edge Gateway for Horizon 8, refer to this TechZone article.) With connection server monitoring specifically for Horizon 8 environments, IT admins can monitor the status of their connection servers from the “Edge details/Infrastructure monitoring” section under “Capacity” in the Horizon Universal Console (see the images below). This allows you to gather information about connection server availability status and certificate details very easily. You can drill down into each connection server appliance’s health for details such as VM CPU, memory, number of connections, and users connected. Additionally, IT admins can keep track of services connected to the connection servers, such as vCenter Service, Active Directory, and Secure Gateway Service. Having this information helps you understand the health of servers and, if required, turn some of the servers off depending on performance requirements. All the relevant information for connection servers is available at your fingertips.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--258cVHWr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/97o0s2l3bauzqdw9ufta.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--258cVHWr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/97o0s2l3bauzqdw9ufta.png" alt="Image description" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Infrastructure monitoring with Horizon Universal Console&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g8f1F-zQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w16yjts32kk9s0bgycoa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g8f1F-zQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w16yjts32kk9s0bgycoa.png" alt="Image description" width="800" height="526"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Connection server services status&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Unified Access Gateway clusters and appliances monitoring&lt;/strong&gt;&lt;br&gt;
The Horizon Universal Console now empowers IT admins to monitor and manage UAG clusters and individual UAG appliances. The new UAG infrastructure monitoring dashboard provides usage information (for example, user session data) and UAG health information. You can now monitor the health of the UAG appliance and its services, along with UAG VM CPU, memory utilization, and certificate information. This usage information can help you identify whether UAG resources are idle and then manage UAG capacity requirements proactively. Please note that to monitor UAG, IT admins need to onboard their UAG onto the Horizon Cloud next-gen platform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7ARnFbVs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3gin1u5oksmr7kg1mrch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7ARnFbVs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3gin1u5oksmr7kg1mrch.png" alt="Image description" width="800" height="559"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;UAG configuration information and status&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Topology data access from Horizon Universal Console&lt;/strong&gt;&lt;br&gt;
Now you can access topology data from your Horizon Universal Console. Through a new dashboard, IT can access information such as site, Horizon edge, pools, pool groups, entitlements, and more. This will help you improve overall user experience with detailed topology information, such as which sites are meeting your performance expectations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Agent monitoring information sent to Splunk for Horizon Plus customers&lt;/strong&gt;&lt;br&gt;
Horizon Cloud next-gen customers who are leveraging the Horizon Standard Plus Subscription or the Horizon Enterprise Plus Subscription licenses can now choose to integrate with Splunk for observability. This will help those customers to consume information such as Horizon logs, metrics, and agent information within Splunk dashboards, which many organizations use for monitoring. For example, any agent errors that are critical and affect users connecting to virtual desktops can be sent to Splunk Enterprise. IT admins can use this information to identify various errors, like agent connectivity issues, and tackle them promptly.&lt;/p&gt;

&lt;p&gt;We’re excited to help IT admins and our own operations team gain more visibility into virtual desktop and app delivery by releasing these latest updates in Horizon Cloud next-gen. To gain a deeper understanding of these updates and more, read our Horizon Cloud next-gen release notes.&lt;/p&gt;

</description>
      <category>vmware</category>
      <category>daas</category>
      <category>horizon</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
