Cloud Native Catalog
Easily import any catalog item into Meshery. Have a design pattern to share? Add yours to the catalog.
Meshery CLI
Import catalog items using mesheryctl; see the Meshery docs for step-by-step instructions.
1. Apply a design file.
mesheryctl design apply -f [file | URL]
2. Apply a WASM filter file.
mesheryctl filter import [file | URL] --wasm-config [filepath|string]



AWS cloudfront controller
MESHERY481b
RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
AWS CLOUDFRONT CONTROLLER
Description
This YAML file defines a Kubernetes Deployment for the ack-cloudfront-controller, a component responsible for managing AWS CloudFront resources in a Kubernetes environment. The Deployment specifies that one replica of the pod should be maintained (replicas: 1). Metadata labels are provided for identification and management purposes, such as app.kubernetes.io/name, app.kubernetes.io/instance, and others, to ensure proper categorization and management by Helm. The pod template section within the Deployment spec outlines the desired state of the pods, including the container's configuration. The container, named controller, uses the ack-cloudfront-controller:latest image and will run a binary (./bin/controller) with specific arguments to configure its operation, such as AWS region, endpoint URL, logging level, and resource tags. Environment variables are defined to provide necessary configuration values to the container. The container exposes an HTTP port (8080) and includes liveness and readiness probes to monitor and manage its health, ensuring the application is running properly and is ready to serve traffic.
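For orientation, here is a minimal sketch of what such a Deployment can look like. The image tag, argument names, probe paths, and env values below are illustrative assumptions, not the design's exact manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ack-cloudfront-controller
  labels:
    app.kubernetes.io/name: ack-cloudfront-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ack-cloudfront-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ack-cloudfront-controller
    spec:
      containers:
        - name: controller
          image: ack-cloudfront-controller:latest  # pin a specific version in production
          command: ["./bin/controller"]
          args: ["--aws-region", "$(AWS_REGION)", "--log-level", "$(ACK_LOG_LEVEL)"]  # illustrative flags
          env:
            - name: AWS_REGION
              value: us-east-1
            - name: ACK_LOG_LEVEL
              value: info
          ports:
            - name: http
              containerPort: 8080
          livenessProbe:
            httpGet:
              path: /healthz   # assumed probe path
              port: 8080
          readinessProbe:
            httpGet:
              path: /readyz    # assumed probe path
              port: 8080
```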
Caveats and Considerations
1. Environment Variables: Verify that the environment variables such as AWS_REGION, AWS_ENDPOINT_URL, and ACK_LOG_LEVEL are correctly set according to your AWS environment and logging preferences. Incorrect values could lead to improper functioning or failure of the controller.
2. Secrets Management: If AWS credentials are required, make sure the AWS_SHARED_CREDENTIALS_FILE and AWS_PROFILE environment variables are correctly configured and the referenced Kubernetes secret exists. Missing or misconfigured secrets can prevent the controller from authenticating with AWS.
3. Resource Requests and Limits: Review and adjust the resource requests and limits to match the expected workload and available cluster resources. Insufficient resources can lead to performance issues, while overly generous requests can waste cluster resources.
4. Probes Configuration: The liveness and readiness probes are configured with specific paths and ports. Ensure that these endpoints are correctly implemented in the application. Misconfigured probes can result in the pod being killed or marked as unready.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
AWS rds controller

MESHERY46e4

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
AWS RDS CONTROLLER
Description
This YAML manifest defines a Kubernetes Deployment for the ACK RDS Controller application. It orchestrates the deployment of the application within a Kubernetes cluster, ensuring its availability and scalability. The manifest specifies various parameters such as the number of replicas, pod template configurations including container settings, environment variables, resource limits, and security context. Additionally, it includes probes for health checks, node selection preferences, tolerations, and affinity rules for optimal scheduling. The manifest encapsulates the deployment requirements necessary for the ACK RDS Controller application to run effectively in a Kubernetes environment.
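As a hedged illustration of the security and health-check fields the description mentions (these are assumed values, not the design's verbatim manifest), the container portion of such a pod template often looks like this fragment:

```yaml
# Fragment of a container spec in the pod template; all values are assumptions.
securityContext:
  allowPrivilegeEscalation: false
  runAsNonRoot: true
  capabilities:
    drop:
      - ALL
livenessProbe:
  httpGet:
    path: /healthz   # assumed endpoint
    port: 8081
readinessProbe:
  httpGet:
    path: /readyz    # assumed endpoint
    port: 8081
```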
Caveats and Considerations
1. Resource Allocation: Ensure that resource requests and limits are appropriately configured based on the expected workload of the application to avoid resource contention and potential performance issues.
2. Security Configuration: Review the security context settings, including privilege escalation, runAsNonRoot, and capabilities, to enforce security best practices and minimize the risk of unauthorized access or privilege escalation within the container.
3. Probe Configuration: Validate the configuration of liveness and readiness probes to ensure they accurately reflect the health and readiness of the application. Incorrect probe settings can lead to unnecessary pod restarts or deployment issues.
4. Environment Variables: Double-check the environment variables provided to the container, ensuring they are correctly set and necessary for the application's functionality. Incorrect or missing environment variables can cause runtime errors or unexpected behavior.
5. Volume Mounts: Verify the volume mounts defined in the deployment, especially if the application requires access to specific data or configuration files. Incorrect volume configurations can result in data loss or application malfunction.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Accelerated mTLS handshake for Envoy data planes

MESHERY4421

RELATED PATTERNS
Design With Validation Errors
MESHERY49e3
ACCELERATED MTLS HANDSHAKE FOR ENVOY DATA PLANES
Description
Cryptographic operations are among the most compute-intensive and critical operations when it comes to secured connections. Istio uses Envoy as the “gateways/sidecar” to handle secure connections and intercept the traffic. Depending upon use cases, when an ingress gateway must handle a large number of incoming TLS and secured service-to-service connections through sidecar proxies, the load on Envoy increases. The potential performance depends on many factors, such as size of the cpuset on which Envoy is running, incoming traffic patterns, and key size. These factors can impact Envoy serving many new incoming TLS requests. To achieve performance improvements and accelerated handshakes, a new feature was introduced in Envoy 1.20 and Istio 1.14. It can be achieved with 3rd Gen Intel® Xeon® Scalable processors, the Intel® Integrated Performance Primitives (Intel® IPP) crypto library, CryptoMB Private Key Provider Method support in Envoy, and Private Key Provider configuration in Istio using ProxyConfig.
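The Istio side of this is driven through ProxyConfig. A minimal sketch using the IstioOperator API follows; the pollDelay value is an assumption you should tune for your traffic patterns:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      privateKeyProvider:
        cryptomb:
          pollDelay: 10ms   # how long to wait before processing a partially filled buffer
```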
Caveats and Considerations
Ensure networking is set up properly and the correct annotations are applied to each resource for custom Intel configuration.
Technologies
Related Patterns
Design With Validation Errors
MESHERY49e3
Acme Operator

MESHERY4627

RELATED PATTERNS
Hello Kubernetes Tutorial
MESHERY4d00
ACME OPERATOR
Description
Let’s Encrypt uses the ACME protocol to verify that you control a given domain name and to issue you a certificate. To get a Let’s Encrypt certificate, you’ll need to choose a piece of ACME client software to use.
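This entry's operator choice aside, one common in-cluster ACME client is cert-manager; purely as a hedged sketch (assuming cert-manager is installed, and with the email and ingress class as placeholders), a Let's Encrypt issuer looks like:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com          # placeholder; use a real contact address
    privateKeySecretRef:
      name: letsencrypt-account-key   # Secret that stores the ACME account key
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx   # assumed ingress class
```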
Caveats and Considerations
We recommend that most people start with the Certbot client. It can simply get a cert for you or also help you install, depending on what you prefer. It’s easy to use, works on many operating systems, and has great documentation.
Technologies
Related Patterns
Hello Kubernetes Tutorial
MESHERY4d00
All relationships

MESHERY4f4a

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
ALL RELATIONSHIPS
Description
This design incorporates all the key relationships, including the following:
1. Hierarchical-Parent-Inventory: This represents a parent-child relationship between components, where one component is a dependency of another.
2. Hierarchical-Parent-Wallet: In a hierarchical-parent-wallet relationship, one component (the "wallet") serves as a container or host for another component, similar to a parent-child structure.
3. Hierarchical-Sibling-MatchLabels: A Match-Labels Relationship in Meshery refers to the configuration where Kubernetes components are linked based on shared labels.
4. Edge-Binding-Mount: An Edge-Mount Relationship in Meshery represents the assignment of persistent storage to Pods via PersistentVolumeClaims (PVC).
5. Edge-Binding-Permission: The Edge-Binding-Permissions Relationship defines how components connect to establish access control and permissions in a system. In the Edge-Binding-Permissions relationship, the binding components, such as role bindings and cluster role bindings, act as essential links that establish and enforce permissions.
6. Edge-Binding-Firewall: An Edge-Firewall Relationship in Meshery models a Kubernetes Network Policy that controls ingress and egress traffic between Pods.
7. Edge-Non-Binding-Network: An Edge-Network Relationship in Meshery represents the networking configuration between Kubernetes components, typically illustrated by a dashed arrow connecting a Service to a Deployment (see the sketch after this list).
8. Edge-Non-Binding-Annotation: Annotation Relationships refer to a visual representation used to indicate a relationship between two components without assigning any semantic meaning to that relationship.
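As one concrete instance, the Edge-Non-Binding-Network relationship in item 7 is what Meshery infers from an ordinary Service-to-Deployment label selector. A minimal sketch with illustrative names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web        # matches the Deployment's pod labels below
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27   # illustrative image
          ports:
            - containerPort: 80
```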
Caveats and Considerations
For detailed considerations on each relationship type, refer to the corresponding individual published designs. These designs provide in-depth insights into best practices, configuration strategies, and potential impacts for each type of relationship.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Amazon Web Services IoT Architecture Diagram

MESHERY449f

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
AMAZON WEB SERVICES IOT ARCHITECTURE DIAGRAM
Description
This comprehensive IoT architecture harnesses the power of Amazon Web Services (AWS) to create a robust and scalable Internet of Things (IoT) ecosystem.
Caveats and Considerations
It cannot be deployed because the nodes used to create the diagram are shapes and not components.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Apache Airflow

MESHERY41d4

RELATED PATTERNS
Minecraft App
MESHERY48dd
APACHE AIRFLOW
Description
Apache Airflow (or simply Airflow) is a platform to programmatically author, schedule, and monitor workflows. When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative. Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.

Airflow works best with workflows that are mostly static and slowly changing. When the DAG structure is similar from one run to the next, it clarifies the unit of work and continuity. Other similar projects include Luigi, Oozie, and Azkaban. Airflow is commonly used to process data, but has the opinion that tasks should ideally be idempotent (i.e., results of the task will be the same and will not create duplicated data in a destination system), and should not pass large quantities of data from one task to the next (though tasks can pass metadata using Airflow's XCom feature). For high-volume, data-intensive tasks, a best practice is to delegate to external services specializing in that type of work. Airflow is not a streaming solution, but it is often used to process real-time data, pulling data off streams in batches.

Principles:
- Dynamic: Airflow pipelines are configuration as code (Python), allowing for dynamic pipeline generation. This allows for writing code that instantiates pipelines dynamically.
- Extensible: Easily define your own operators and executors, and extend the library so that it fits the level of abstraction that suits your environment.
- Elegant: Airflow pipelines are lean and explicit. Parameterizing your scripts is built into the core of Airflow using the powerful Jinja templating engine.
- Scalable: Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers.
Caveats and Considerations
Make sure to fill in your own PostgreSQL username, password, host, port, etc. to see Airflow working per your database requirements. Pass them as environment variables, or create Secrets for passwords and a ConfigMap for hosts and ports.
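A hedged sketch of that separation, with placeholder object names and values rather than the design's actual resources:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: airflow-db-credentials   # placeholder name
type: Opaque
stringData:
  username: airflow
  password: change-me            # placeholder; supply your own
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: airflow-db-config        # placeholder name
data:
  host: postgres.default.svc.cluster.local
  port: "5432"
```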
Technologies
Related Patterns
Minecraft App
MESHERY48dd
Apache ShardingSphere Operator

MESHERY4803

RELATED PATTERNS
Untitled Design
MESHERY4135
APACHE SHARDINGSPHERE OPERATOR
Description
The ShardingSphere Kubernetes Operator automates provisioning, management, and operations of ShardingSphere Proxy clusters running on Kubernetes. Apache ShardingSphere is an ecosystem to transform any database into a distributed database system, and enhance it with sharding, elastic scaling, encryption features & more.
Caveats and Considerations
Ensure Apache ShardingSphere and the Knative Service are registered as MeshModels.
Technologies
Related Patterns
Untitled Design
MESHERY4135
App-graph

MESHERY4f74

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
APP-GRAPH
Description
Argo and Kubernetes graph app
Caveats and Considerations
No Caveats
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Argo CD w/Dex

MESHERY4c82

RELATED PATTERNS
Untitled Design
MESHERY4135
ARGO CD W/DEX
Description
The Argo CD server component exposes the API and UI. The operator creates a Service to expose this component and can be accessed through the various methods available in Kubernetes.
Caveats and Considerations
Dex can be used to delegate authentication to external identity providers like GitHub, SAML and others. SSO configuration of Argo CD requires updating the Argo CD CR with Dex connector settings.
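Whether driven through the operator's Argo CD CR or directly, the Dex connector settings boil down to a block like the following. This sketches the argocd-cm ConfigMap route with a GitHub connector; the URL, client ID, and org are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  url: https://argocd.example.com   # placeholder external URL
  dex.config: |
    connectors:
      - type: github
        id: github
        name: GitHub
        config:
          clientID: your-client-id             # placeholder
          clientSecret: $dex.github.clientSecret
          orgs:
            - name: your-org                   # placeholder
```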
Technologies
Related Patterns
Untitled Design
MESHERY4135
ArgoCD application controller

MESHERY48a9

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
ARGOCD APPLICATION CONTROLLER
Description
This YAML configuration describes a Kubernetes Deployment for the ArgoCD Application Controller. It includes metadata defining labels for identification purposes. The spec section outlines the deployment's details, including the desired number of replicas and a pod template. Within the pod template, there's a single container named argocd-application-controller, which runs the ArgoCD Application Controller binary. This container is configured with various environment variables sourced from ConfigMaps, defining parameters such as reconciliation timeouts, repository server details, logging settings, and affinity rules. Port 8082 is specified for readiness probes, and volumes are mounted for storing TLS certificates and temporary data. Additionally, the deployment specifies a service account and defines pod affinity rules for scheduling. These settings collectively ensure the reliable operation of the ArgoCD Application Controller within Kubernetes clusters, facilitating efficient management of applications within an ArgoCD instance.
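For example, the ConfigMap-sourced environment and the port-8082 readiness probe that the description refers to typically look like this container-spec fragment (the ConfigMap key names here are assumptions):

```yaml
# Fragment of the argocd-application-controller container spec; key names are illustrative.
env:
  - name: ARGOCD_RECONCILIATION_TIMEOUT
    valueFrom:
      configMapKeyRef:
        name: argocd-cm
        key: timeout.reconciliation
        optional: true
readinessProbe:
  httpGet:
    path: /healthz
    port: 8082
```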
Caveats and Considerations
1. Environment Configuration: Ensure that the environment variables configured for the application controller align with your deployment requirements. Review and adjust settings such as reconciliation timeouts, logging levels, and repository server details as needed.
2. Resource Requirements: Depending on your deployment environment and workload, adjust resource requests and limits for the container to ensure optimal performance and resource utilization.
3. Security: Pay close attention to security considerations, especially when handling sensitive data such as TLS certificates. Ensure that proper encryption and access controls are in place for any secrets used in the deployment.
4. High Availability: Consider strategies for achieving high availability and fault tolerance for the ArgoCD Application Controller. This may involve running multiple replicas of the controller across different nodes or availability zones.
5. Monitoring and Alerting: Implement robust monitoring and alerting mechanisms to detect and respond to any issues or failures within the ArgoCD Application Controller deployment. Utilize tools such as Prometheus and Grafana to monitor key metrics and set up alerts for critical events.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
ArgoCD-Application [Components added for Network, Storage and Orchestration]

MESHERY41e0

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
ARGOCD-APPLICATION [COMPONENTS ADDED FOR NETWORK, STORAGE AND ORCHESTRATION]
Description
This design deploys an ArgoCD application that includes an Nginx virtual service, an Nginx server, a Kubernetes pod autoscaler, OpenEBS's Jiva volume, and a sample ArgoCD application listening on 127.0.0.4.
Caveats and Considerations
Ensure networking is set up properly.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Argocd metrics

MESHERY48c8

RELATED PATTERNS
Pod Readiness
MESHERY4b83
ARGOCD METRICS
Description
This is a sample ArgoCD design for measuring metrics, provided as a Kubernetes manifest.
Caveats and Considerations
No caveats
Technologies
Related Patterns
Pod Readiness
MESHERY4b83
Autogenerated

MESHERY4102

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
AUTOGENERATED
Description
This YAML manifest defines a Kubernetes Deployment for the Thanos Operator, named "thanos-operator," with one replica. The deployment's pod template is labeled "app: thanos-operator" and includes security settings to run as a non-root user with specific user (1000) and group (2000) IDs. The main container, also named "thanos-operator," uses the "thanos-io/thanos:latest" image, runs with minimal privileges, and starts with the argument "--log.level=info." It listens on port 8080 for HTTP traffic and has liveness and readiness probes set to check the "/metrics" endpoint. Resource requests and limits are defined for CPU and memory. Additionally, the pod is scheduled on Linux nodes with specific node affinity rules and tolerations for certain node taints, ensuring appropriate node placement and scheduling.
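A condensed sketch of the scheduling and security settings described above, shown in isolation from the full manifest (the toleration key is an assumption):

```yaml
# Pod-level settings matching the description; the tolerated taint is assumed.
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  fsGroup: 2000
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/os
              operator: In
              values:
                - linux
tolerations:
  - key: node-role.kubernetes.io/control-plane   # assumed taint
    operator: Exists
    effect: NoSchedule
```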
Caveats and Considerations
1. Security Context:
1.1 The runAsUser: 1000 and fsGroup: 2000 settings are essential for running the container with non-root privileges. Ensure that these user IDs are correctly configured and have the necessary permissions within your environment.
1.2 Dropping all capabilities (drop: - ALL) enhances security but may limit certain functionalities. Verify that the Thanos container does not require any additional capabilities.
2. Image Tag: The image tag is set to "latest," which can introduce instability since it pulls the most recent image version that might not be thoroughly tested. Consider specifying a specific, stable version tag for better control over updates and rollbacks.
3. Resource Requests and Limits: The defined resource requests and limits (memory: "64Mi"/"128Mi", cpu: "250m"/"500m") might need adjustment based on the actual workload and performance characteristics of the Thanos Operator in your environment. Monitor resource usage and tweak these settings accordingly to prevent resource starvation or over-provisioning.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Autoscaling based on Metrics in GKE

MESHERY400b

RELATED PATTERNS
Pod Resource Request
MESHERY4a23
AUTOSCALING BASED ON METRICS IN GKE
Description
This design demonstrates how to automatically scale your Google Kubernetes Engine (GKE) workloads based on Prometheus-style metrics emitted by your application. It uses the [GKE workload metrics](https://cloud.google.com/stackdriver/docs/solutions/gke/managing-metrics#workload-metrics) pipeline to collect the metrics emitted from the example application and send them to [Cloud Monitoring](https://cloud.google.com/monitoring), and then uses the [HorizontalPodAutoscaler](https://cloud.google.com/kubernetes-engine/docs/concepts/horizontalpodautoscaler) along with the [Custom Metrics Adapter](https://github.com/GoogleCloudPlatform/k8s-stackdriver/tree/master/custom-metrics-stackdriver-adapter) to scale the application.
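A sketch of the HorizontalPodAutoscaler half of that pipeline, assuming the Custom Metrics Adapter is installed and the application exports a Prometheus-style metric; the workload name, metric name, and target value are placeholders:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: custom-metric-hpa      # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app          # placeholder workload
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Pods
      pods:
        metric:
          name: custom_metric  # placeholder metric emitted by the app
        target:
          type: AverageValue
          averageValue: "20"
```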
Caveats and Considerations
Add your own custom Prometheus metrics to GKE for better scaling of workloads.
Technologies
Related Patterns
Pod Resource Request
MESHERY4a23
Aws Cloudwatch Agent for Cluster Metrics

MESHERY4b82

RELATED PATTERNS
Pod Readiness
MESHERY4b83
AWS CLOUDWATCH AGENT FOR CLUSTER METRICS
Description
CloudWatch Agent to collect Kubernetes cluster metrics involves configuring and deploying the CloudWatch Agent within your Kubernetes environment. This agent facilitates the collection and forwarding of various system-level and application-level metrics to AWS CloudWatch, enabling comprehensive monitoring and analysis. By integrating the CloudWatch Agent, Kubernetes administrators can effortlessly monitor cluster performance metrics such as CPU utilization, memory usage, disk I/O, and network traffic. This setup enhances operational visibility, supports proactive capacity planning, and enables the creation of alarms and notifications based on customizable thresholds, ensuring robust and reliable management of Kubernetes infrastructure at scale.
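The agent is typically deployed as a DaemonSet so one copy runs per node. A heavily condensed sketch follows; the namespace, image tag, service account, and ConfigMap name are assumptions, and the official manifests are considerably more involved:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cloudwatch-agent
  namespace: amazon-cloudwatch        # conventional namespace; adjust as needed
spec:
  selector:
    matchLabels:
      name: cloudwatch-agent
  template:
    metadata:
      labels:
        name: cloudwatch-agent
    spec:
      serviceAccountName: cloudwatch-agent   # assumed service account with IAM permissions
      containers:
        - name: cloudwatch-agent
          image: amazon/cloudwatch-agent:latest   # pin a version in production
          resources:
            requests:
              cpu: 200m
              memory: 200Mi
          volumeMounts:
            - name: cwagentconfig
              mountPath: /etc/cwagentconfig
      volumes:
        - name: cwagentconfig
          configMap:
            name: cwagentconfig       # assumed ConfigMap holding the JSON agent configuration
```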
Caveats and Considerations
When deploying the CloudWatch Agent to collect Kubernetes cluster metrics, there are several caveats and considerations to keep in mind:
- Resource Consumption: The CloudWatch Agent runs as a daemon set within Kubernetes, consuming resources (CPU, memory) on each node where it's deployed. Ensure your cluster has sufficient resources to accommodate this additional workload.
- Networking: Verify that nodes in your Kubernetes cluster can communicate with AWS CloudWatch endpoints over the network. This may involve configuring network policies, security groups, or VPC settings to allow outbound traffic to AWS services.
- Permissions: Set up IAM roles or IAM users with appropriate permissions to allow the CloudWatch Agent to publish metrics to CloudWatch. Follow the principle of least privilege to minimize security risks.
- Configuration: Properly configure the CloudWatch Agent to collect relevant metrics based on your application and infrastructure requirements. Incorrect configuration can lead to incomplete or inaccurate monitoring data.
- Version Compatibility: Ensure compatibility between the CloudWatch Agent version and your Kubernetes cluster version. Updates or changes in Kubernetes versions may require corresponding updates to the CloudWatch Agent for optimal performance and compatibility.
- Monitoring Costs: Regularly monitor and review the costs associated with CloudWatch metrics ingestion and storage. Depending on the volume of metrics collected, costs can vary, especially if high-resolution metrics are enabled.
- High Availability: Design your deployment for high availability to ensure continuous monitoring and metric collection. Consider deploying multiple instances of the CloudWatch Agent across different availability zones or regions for resilience.
- Security: Implement best practices for securing the CloudWatch Agent deployment, including encrypting sensitive data in transit and at rest, using secure IAM roles, and regularly updating to the latest agent version to mitigate security vulnerabilities.
- Integration with Monitoring Tools: Integrate CloudWatch metrics with your existing monitoring and alerting tools to streamline incident response and operational workflows. Ensure that metrics from CloudWatch can be correlated with other monitoring data for comprehensive visibility.
Technologies
Related Patterns
Pod Readiness
MESHERY4b83
Azure Monitor Containers

MESHERY4804

RELATED PATTERNS
Pod Readiness
MESHERY4b83
AZURE MONITOR CONTAINERS
Description
Azure Monitor Containers is a service that monitors and provides insights into the performance and health of Azure container apps and Kubernetes clusters:
- Container insights: Collects and analyzes container logs and metric data from Azure Kubernetes clusters. It supports Azure Kubernetes Service (AKS), Azure Arc-enabled Kubernetes clusters, and Windows Server 2022.
- Metric data collection: Regularly collects metric data from container apps to help users understand their performance and health. Users can visualize the data in the Azure portal's metrics explorer.
- Hybrid Kubernetes monitoring: Supports monitoring Kubernetes clusters on-premises and on Azure Stack with AKS Engine. Users can install the container agent with their AKS workloads to create alerts and gain insights into the performance of their on-premises workloads.
- Log streaming: Allows users to view streaming system and console logs from a container in near real-time.
Caveats and Considerations
Container insights is a feature of Azure Monitor that collects and analyzes container logs from Azure Kubernetes clusters or Azure Arc-enabled Kubernetes clusters and their components. You can analyze the collected data for the different components in your cluster with a collection of views and prebuilt workbooks.
Technologies
Related Patterns
Pod Readiness
MESHERY4b83
Azure-monitor-containers

MESHERY4ff6

RELATED PATTERNS
Pod Readiness
MESHERY4b83
AZURE-MONITOR-CONTAINERS
Description
Azure Monitor managed service for Prometheus and Container insights work together for complete monitoring of your Kubernetes environment. This article describes both features and the data they collect. Azure Monitor managed service for Prometheus is a fully managed service based on the Prometheus project from the Cloud Native Computing Foundation. It allows you to collect and analyze metrics from your Kubernetes cluster at scale and analyze them using prebuilt dashboards in Grafana. Container insights is a feature of Azure Monitor that collects and analyzes container logs from Azure Kubernetes clusters or Azure Arc-enabled Kubernetes clusters and their components. You can analyze the collected data for the different components in your cluster with a collection of views and prebuilt workbooks.
Caveats and Considerations
Container insights collects metric data from your cluster in addition to logs. This functionality has been replaced by Azure Monitor managed service for Prometheus. You can analyze that data using built-in dashboards in Managed Grafana and alert on them using prebuilt Prometheus alert rules. You can continue to have Container insights collect metric data so you can use the Container insights monitoring experience. Or you can save cost by disabling this collection and using Grafana for metric analysis. See Configure data collection in Container insights using data collection rule for configuration options. For more information, check out this doc: https://learn.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-overview
Technologies
Related Patterns
Pod Readiness
MESHERY4b83
Bank of Anthos

MESHERY48be

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
BANK OF ANTHOS
Description
Bank of Anthos is a sample HTTP-based web app that simulates a bank's payment processing network, allowing users to create artificial bank accounts and complete transactions.
Caveats and Considerations
Ensure enough resources are available on the cluster.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
BookInfo App w/o Kubernetes

MESHERY47b4

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
BOOKINFO APP W/O KUBERNETES
Description
The Bookinfo application is a collection of microservices that work together to display information about a book. The main microservice is called productpage, which fetches data from the details and reviews microservices to populate the book's page. The details microservice contains specific information about the book, such as its ISBN and number of pages. The reviews microservice contains reviews of the book and also makes use of the ratings microservice to retrieve ranking information for each review. The reviews microservice has three different versions: v1, v2, and v3. In v1, the microservice does not interact with the ratings service. In v2, it calls the ratings service and displays the rating using black stars, ranging from 1 to 5. In v3, it also calls the ratings service but displays the rating using red stars, again ranging from 1 to 5. These different versions allow for flexibility and experimentation with different ways of presenting the book's ratings to users.
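Since the three reviews versions are usually exercised through Istio traffic routing, a typical companion resource is a VirtualService that pins traffic to one subset. A sketch, assuming a DestinationRule defining a `v3` subset for the `reviews` host exists:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v3   # assumes a DestinationRule with subset v3
```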
Caveats and Considerations
Users need to ensure that their cluster is properly configured with Istio, including the installation of the necessary components and enabling sidecar injection for the microservices. Ensure that Meshery Adapter for Istio service mesh is installed properly for easy installation/registration of Istio's MeshModels with Meshery Server. Another consideration is the resource requirements of the application. The Bookinfo application consists of multiple microservices, each running as a separate container. Users should carefully assess the resource needs of the application and ensure that their cluster has sufficient capacity to handle the workload. This includes considering factors such as CPU, memory, and network bandwidth requirements.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Brainstorming Template

MESHERY473d

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
BRAINSTORMING TEMPLATE
Description
Use this template to structure brainstorming sessions effectively for various project types, including product launches and team collaboration sessions.
Caveats and Considerations
This template is designed as a flexible starting point for brainstorming sessions. It’s adaptable for various goals, such as ideation, planning, and team discussions. For best results, use Kanvas’s collaborative tools like sticky notes, color coding, and comments to enrich discussions and enhance readability. Adjust sections as needed based on the scope of your brainstorming session.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Browserless Chrome

MESHERY4c4b
BROWSERLESS CHROME
Description
Chrome as a service container. Bring your own hardware or cloud.

Homepage: https://www.browserless.io

## Configuration

Browserless can be configured via environment variables:

```yaml
env:
  PREBOOT_CHROME: "true"
```
Caveats and Considerations
Check out the [official documentation](https://docs.browserless.io/docs/docker.html) for the available options.

## Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| replicaCount | int | `1` | Number of replicas (pods) to launch. |
| image.repository | string | `"browserless/chrome"` | Name of the image repository to pull the container image from. |
| image.pullPolicy | string | `"IfNotPresent"` | [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#updating-images) for updating already existing images on a node. |
| image.tag | string | `""` | Image tag override for the default value (chart appVersion). |
| imagePullSecrets | list | `[]` | Reference to one or more secrets to be used when [pulling images](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret) (from private registries). |
| nameOverride | string | `""` | A name in place of the chart name for `app:` labels. |
| fullnameOverride | string | `""` | A name to substitute for the full names of resources. |
| volumes | list | `[]` | Additional storage [volumes](https://kubernetes.io/docs/concepts/storage/volumes/). See the [API reference](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#volumes-1) for details. |
| volumeMounts | list | `[]` | Additional [volume mounts](https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/). See the [API reference](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#volumes-1) for details. |
| envFrom | list | `[]` | Additional environment variables mounted from [secrets](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables) or [config maps](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables). See the [API reference](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#environment-variables) for details. |
| env | object | `{}` | Additional environment variables passed directly to containers. See the [API reference](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#environment-variables) for details. |
| serviceAccount.create | bool | `true` | Enable service account creation. |
| serviceAccount.annotations | object | `{}` | Annotations to be added to the service account. |
| serviceAccount.name | string | `""` | The name of the service account to use. If not set and create is true, a name is generated using the fullname template. |
| podAnnotations | object | `{}` | Annotations to be added to pods. |
| podSecurityContext | object | `{}` | Pod [security context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod). See the [API reference](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context) for details. |
| securityContext | object | `{}` | Container [security context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container). See the [API reference](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context-1) for details. |
| service.annotations | object | `{}` | Annotations to be added to the service. |
| service.type | string | `"ClusterIP"` | Kubernetes [service type](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types). |
| service.loadBalancerIP | string | `nil` | Only applies when the service type is LoadBalancer. Load balancer will get created with the IP specified in this field. |
| service.loadBalancerSourceRanges | list | `[]` | If specified (and supported by the cloud provider), traffic through the load balancer will be restricted to the specified client IPs. Valid values are IP CIDR blocks. |
| service.port | int | `80` | Service port. |
| service.nodePort | int | `nil` | Service node port (when applicable). |
| service.externalTrafficPolicy | string | `nil` | Route external traffic to node-local or cluster-wide endpoints. Useful for [preserving the client source IP](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip). |
| resources | object | No requests or limits. | Container resource [requests and limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/). See the [API reference](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources) for details. |
| autoscaling | object | Disabled by default. | Autoscaling configuration (see [values.yaml](values.yaml) for details). |
| nodeSelector | object | `{}` | [Node selector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector) configuration. |
| tolerations | list | `[]` | [Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) for node taints. See the [API reference](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling) for details. |
| affinity | object | `{}` | [Affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) configuration. See the [API reference](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling) for details. |
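Putting a handful of these values together, a hedged example of a chart override file (all values shown are illustrative, not recommendations):

```yaml
# values.override.yaml -- illustrative overrides only
replicaCount: 2
image:
  tag: "latest"          # prefer pinning a specific tag
env:
  PREBOOT_CHROME: "true"
service:
  type: ClusterIP
  port: 80
```

Such a file would typically be passed to Helm with `-f values.override.yaml` when installing the chart.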
Technologies
Busybox (single)

MESHERY4c98

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
BUSYBOX (SINGLE)
Description
This design deploys a simple busybox app inside the Layer5-test namespace.
Caveats and Considerations
None
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Busybox (single) (fresh)

MESHERY4db7

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
BUSYBOX (SINGLE) (FRESH)
Description
This design deploys a simple busybox app inside the Layer5-test namespace.
Caveats and Considerations
None
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Catalog Design2

MESHERY41bb

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
CATALOG DESIGN2
Description
This design contains Kubernetes resources such as a Deployment and a Service.
Caveats and Considerations
There are no caveats.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Cloud native pizza store

MESHERY43eb

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
CLOUD NATIVE PIZZA STORE
Description
The Pizza Store application simulates placing a pizza order that is going to be processed by different services. The application is composed of the Pizza Store Service, which serves as the front end and backend to place the order. The order is sent to the Kitchen Service for preparation, and once the order is ready to be delivered, the Delivery Service takes the order to your door. As with any other application, these services need to store and read data from a persistent store such as a database and exchange messages if a more event-driven approach is needed. This application uses PostgreSQL and Kafka, as they are well-known components among developers. As you can see in the diagram, if we want to connect to PostgreSQL from the Pizza Store Service, we need to add to our application the PostgreSQL driver that must match the PostgreSQL instance version that we have available. A Kafka client is required in all the services that are interested in publishing or consuming messages/events. Because you have drivers and clients that are sensitive to the available versions of the infrastructure components, the lifecycle of the application is now bound to the lifecycle of these components. Adding Dapr to the picture not only breaks these dependencies, but also removes from developers the responsibility of choosing the right driver/client and figuring out how these need to be configured for the application to work correctly. Dapr provides developers building block APIs, such as the StateStore and PubSub APIs, that developers can use without knowing the details of which infrastructure is going to be connected under the covers.
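For instance, swapping a version-pinned Kafka client for Dapr's PubSub building block reduces the application-facing surface to a Component definition like this sketch (the component name and broker address are placeholders):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pizzapubsub          # placeholder component name
spec:
  type: pubsub.kafka
  version: v1
  metadata:
    - name: brokers
      value: "kafka:9092"    # placeholder broker address
    - name: consumerGroup
      value: "pizza-store"
    - name: authType
      value: "none"
```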
Caveats and Considerations
The application services are written using Java + Spring Boot. These services use the Dapr Java SDK to interact with the Dapr PubSub and Statestore APIs. To run the services locally, you can use the Testcontainers integration already included in the projects. For example, you can start a local version of the pizza-store service by running the following command inside the pizza-store/ directory (this requires having Java and Maven installed locally). For caveats and considerations, refer to this GitHub repo: https://github.com/salaboy/pizza?tab=readme-ov-file#installation
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Configure a Pod to Use a ConfigMap

MESHERY49cb

RELATED PATTERNS
Untitled Design
MESHERY4135
CONFIGURE A POD TO USE A CONFIGMAP
Description
Many applications rely on configuration which is used during either application initialization or runtime. Most times, there is a requirement to adjust values assigned to configuration parameters. ConfigMaps are a Kubernetes mechanism that lets you inject configuration data into application pods. The ConfigMap concept allows you to decouple configuration artifacts from image content to keep containerized applications portable. For example, you can download and run the same container image to spin up containers for the purposes of local development, system test, or running a live end-user workload. This design provides a usage example demonstrating how to create ConfigMaps and configure Pods using data stored in ConfigMaps.
Caveats and Considerations
In essence, this configuration creates a Deployment that runs 3 replicas of a pod, each using the alpine:3 image. Inside each pod, a script continuously prints the current date and a message with a preferred sport fetched from a ConfigMap named "sport". The ConfigMap provides the "sport" data that's mounted into the container's file system. This example demonstrates how to use ConfigMaps to inject configuration data into your pods, allowing you to decouple configuration from your application's image.
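Reconstructed from that summary, the core objects look roughly like the following. The ConfigMap value, script, and mount path are assumptions consistent with the description, not the design's exact manifest:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sport
data:
  sport: football            # assumed value
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: configmap-demo       # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: configmap-demo
  template:
    metadata:
      labels:
        app: configmap-demo
    spec:
      containers:
        - name: alpine
          image: alpine:3
          command: ["/bin/sh", "-c"]
          args:
            - |
              while true; do
                echo "$(date) My preferred sport is $(cat /etc/config/sport)"
                sleep 10
              done
          volumeMounts:
            - name: sport-volume
              mountPath: /etc/config   # assumed mount path
      volumes:
        - name: sport-volume
          configMap:
            name: sport
```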
Technologies
Related Patterns
Untitled Design
MESHERY4135
Consul on kubernetes

MESHERY429d

RELATED PATTERNS
Minecraft App
MESHERY48dd
CONSUL ON KUBERNETES
Description
Consul is a tool for discovering, configuring, and managing services in distributed systems. It provides features like service discovery, health checking, key-value storage, and distributed coordination. In Kubernetes, Consul can be useful in several ways:
1. Service Discovery: Kubernetes already has built-in service discovery through DNS and environment variables. However, Consul provides more advanced features such as service registration, DNS-based service discovery, and health checking. This can be particularly useful if you have services deployed both within and outside of Kubernetes, as Consul can provide a unified service discovery mechanism across your entire infrastructure.
2. Configuration Management: Consul includes a key-value store that can be used to store configuration data. This can be used to configure applications dynamically at runtime, allowing for more flexible and dynamic deployments.
3. Health Checking: Consul can perform health checks on services to ensure they are functioning correctly. If a service fails its health check, Consul can automatically remove it from the pool of available instances, preventing traffic from being routed to it until it recovers.
4. Service Mesh: Consul can also be used as a service mesh in Kubernetes, providing features like traffic splitting, encryption, and observability. This can help you to manage communication between services within your Kubernetes cluster more effectively.
Overall, Consul can complement Kubernetes by providing additional features and capabilities for managing services in distributed systems. It can help to simplify and streamline the management of complex microservices architectures, providing greater visibility, resilience, and flexibility.
Caveats and Considerations
Customize the design according to your requirements; the image is pulled from Docker Hub.
Technologies
Related Patterns
Minecraft App
MESHERY48dd
CryptoMB-TLS-handshake-acceleration-for-Istio

MESHERY4f96

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
CRYPTOMB-TLS-HANDSHAKE-ACCELERATION-FOR-ISTIO
Description
Depending upon use cases, when an ingress gateway must handle a large number of incoming TLS and secured service-to-service connections through sidecar proxies, the load on Envoy increases. The potential performance depends on many factors, such as size of the cpuset on which Envoy is running, incoming traffic patterns, and key size. These factors can impact Envoy serving many new incoming TLS requests. To achieve performance improvements and accelerated handshakes, a new feature was introduced in Envoy 1.20 and Istio 1.14. It can be achieved with 3rd Gen Intel® Xeon® Scalable processors, the Intel® Integrated Performance Primitives (Intel® IPP) crypto library, CryptoMB Private Key Provider Method support in Envoy, and Private Key Provider configuration in Istio using ProxyConfig.

Envoy uses BoringSSL as the default TLS library. BoringSSL supports setting private key methods for offloading asynchronous private key operations, and Envoy implements a private key provider framework to allow creation of Envoy extensions which handle TLS handshakes private key operations (signing and decryption) using the BoringSSL hooks.

CryptoMB private key provider is an Envoy extension which handles BoringSSL TLS RSA operations using Intel AVX-512 multi-buffer acceleration. When a new handshake happens, BoringSSL invokes the private key provider to request the cryptographic operation, and then the control returns to Envoy. The RSA requests are gathered in a buffer. When the buffer is full or the timer expires, the private key provider invokes Intel AVX-512 processing of the buffer. When processing is done, Envoy is notified that the cryptographic operation is done and that it may continue with the handshakes.
Caveats and Considerations
None
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
CryptoMB-TLS-handshake-acceleration-for-Istio

MESHERY42b7

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
CRYPTOMB-TLS-HANDSHAKE-ACCELERATION-FOR-ISTIO
Description
Envoy uses BoringSSL as the default TLS library. BoringSSL supports setting private key methods for offloading asynchronous private key operations, and Envoy implements a private key provider framework to allow creation of Envoy extensions which handle TLS handshakes private key operations (signing and decryption) using the BoringSSL hooks.

CryptoMB private key provider is an Envoy extension which handles BoringSSL TLS RSA operations using Intel AVX-512 multi-buffer acceleration. When a new handshake happens, BoringSSL invokes the private key provider to request the cryptographic operation, and then the control returns to Envoy. The RSA requests are gathered in a buffer. When the buffer is full or the timer expires, the private key provider invokes Intel AVX-512 processing of the buffer. When processing is done, Envoy is notified that the cryptographic operation is done and that it may continue with the handshakes.
Caveats and Considerations
None
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
CryptoMB.yml

MESHERY4c1f

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
CRYPTOMB.YML
Description
Cryptographic operations are among the most compute-intensive and critical operations when it comes to secured connections. Istio uses Envoy as the “gateways/sidecar” to handle secure connections and intercept the traffic. Depending upon use cases, when an ingress gateway must handle a large number of incoming TLS and secured service-to-service connections through sidecar proxies, the load on Envoy increases. The potential performance depends on many factors, such as size of the cpuset on which Envoy is running, incoming traffic patterns, and key size. These factors can impact Envoy serving many new incoming TLS requests. To achieve performance improvements and accelerated handshakes, a new feature was introduced in Envoy 1.20 and Istio 1.14. It can be achieved with 3rd Gen Intel® Xeon® Scalable processors, the Intel® Integrated Performance Primitives (Intel® IPP) crypto library, CryptoMB Private Key Provider Method support in Envoy, and Private Key Provider configuration in Istio using ProxyConfig.
Caveats and Considerations
Ensure networking is set up properly and the correct annotations are applied to each resource for custom Intel configuration.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Dapr

MESHERY45f7

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
DAPR
Description
A standard Dapr control plane design.
Caveats and Considerations
none
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Dapr OAuth Authorization to External Service

MESHERY4ce9
DAPR OAUTH AUTHORIZATION TO EXTERNAL SERVICE
Description
This design walks you through the steps of setting up the OAuth middleware to enable a service to interact with external services requiring authentication. This design separates the authentication/authorization concerns from the application. Check out https://github.com/dapr/samples/tree/master/middleware-oauth-microsoftazure for more information and to try it out in your own environment.
Caveats and Considerations
Here's how you would replace the placeholders with actual values and apply the configuration to your Kubernetes cluster:

1. Replace `"YOUR_APPLICATION_ID"`, `"YOUR_CLIENT_SECRET"`, and `"YOUR_TENANT_ID"` with your actual values in the `msgraphsp` component metadata:

```yaml
metadata:
  # OAuth2 ClientID, for Microsoft Identity Platform it is the AAD Application ID
  - name: clientId
    value: "your_actual_application_id"
  # OAuth2 Client Secret
  - name: clientSecret
    value: "your_actual_client_secret"
  # Application Scope for Microsoft Graph API (vs. User Scope)
  - name: scopes
    value: "https://graph.microsoft.com/.default"
  # Token URL for Microsoft Identity Platform, TenantID is the Tenant (also sometimes called Directory) ID of the AAD
  - name: tokenURL
    value: "https://login.microsoftonline.com/your_actual_tenant_id/oauth2/v2.0/token"
```

2. Apply the modified YAML configuration to your Kubernetes cluster using `kubectl apply -f your_file.yaml`.

Ensure you've replaced `"your_actual_application_id"`, `"your_actual_client_secret"`, and `"your_actual_tenant_id"` with the appropriate values corresponding to your Microsoft Graph application and Azure Active Directory configuration before applying the configuration to your cluster.
Technologies
Dapr with Kubernetes events

MESHERY4215

RELATED PATTERNS
Pod Readiness
MESHERY4b83
DAPR WITH KUBERNETES EVENTS
Description
This design shows an example of running Dapr with a Kubernetes events input binding. You'll be deploying the Node application, and it will require a component definition with a Kubernetes event binding component. Check out https://github.com/dapr/samples/tree/master/read-kubernetes-events#read-kubernetes-events for more info.
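The binding component that description calls for is small. A sketch, assuming the `bindings.kubernetes` input binding watching the `default` namespace (values are placeholders to adapt):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kubevents            # the app receives events on a route matching this name
spec:
  type: bindings.kubernetes
  version: v1
  metadata:
    - name: namespace
      value: "default"       # namespace to watch
    - name: resyncPeriodInSec
      value: "5"
```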
Caveats and Considerations
Make sure to replace items such as Docker images and credentials to try this out on your local cluster.
Technologies
Related Patterns
Pod Readiness
MESHERY4b83
Delay Action for Chaos Mesh

MESHERY4dcc

RELATED PATTERNS
Untitled Design
MESHERY4135
DELAY ACTION FOR CHAOS MESH
Description
A simple example of Chaos Mesh's delay action.
Caveats and Considerations
An example of the delay action.
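For reference, a minimal NetworkChaos resource with the delay action looks like this; the selector, latency values, and duration are placeholders:

```yaml
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: delay-example        # placeholder name
spec:
  action: delay
  mode: one                  # inject into one matching pod
  selector:
    namespaces:
      - default              # placeholder namespace
    labelSelectors:
      app: web               # placeholder label
  delay:
    latency: "10ms"
    jitter: "2ms"
  duration: "30s"
```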
Technologies
Related Patterns
Untitled Design
MESHERY4135
Deployment Web

MESHERY477c

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
DEPLOYMENT WEB
Description
Simple deployment of a web application.
Caveats and Considerations
There are no caveats.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Design With Validation Errors

MESHERY49e3
DESIGN WITH VALIDATION ERRORS
Description
Design with proper validation
Caveats and Considerations
Nothing specific
Technologies
Distributed Database w/ ShardingSphere

MESHERY4ba3

RELATED PATTERNS
Pod Resource Request
MESHERY4a23
DISTRIBUTED DATABASE W/ SHARDINGSPHERE
Description
The Distributed Database with ShardingSphere design outlines a robust architecture for deploying and managing a distributed database system. ShardingSphere facilitates horizontal partitioning of data across multiple nodes, known as shards, to enhance scalability, performance, and fault tolerance. This design details the configuration of ShardingSphere to manage shard key distribution, data sharding strategies, and query routing mechanisms efficiently. It emphasizes optimizing data access and synchronization while ensuring high availability and data integrity. Key considerations include defining shard key strategies for balanced data distribution, implementing robust monitoring and alerting systems, and integrating backup and recovery solutions for comprehensive data management. This design is ideal for applications requiring scalable and resilient database solutions in distributed computing environments.
Caveats and Considerations
While deploying the Distributed Database with ShardingSphere design, several caveats should be considered to ensure optimal performance and reliability. First, configuring and maintaining the sharding strategy requires careful planning to avoid uneven data distribution among shards, which can lead to performance bottlenecks. Additionally, managing shard synchronization and ensuring data consistency across distributed nodes can be complex and requires robust monitoring and management tools. Security concerns such as data encryption, access control, and securing inter-node communication must also be addressed to prevent unauthorized access and data breaches. Furthermore, the operational overhead of maintaining multiple database instances and coordinating schema changes across shards can be significant and should be managed effectively to minimize downtime and operational risks.
Technologies
Related Patterns
Pod Resource Request
MESHERY4a23
EC2-CONTROLLER

MESHERY40b1

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
EC2-CONTROLLER
Description
This design provides everything needed to set up the ACK (AWS Controllers for Kubernetes) EC2 controller in your Kubernetes cluster, including CRDs, permissions, and pod configuration for managing EC2 resources directly from your cluster.
Caveats and Considerations
1. Kubernetes Cluster Connection: Ensure you have your cluster connected to Meshery.
2. Set up a Secret: Base64 encode your AWS access key and secret access key, and store them in the Kubernetes Secret.
3. Environment Variables: The design is pre-configured to use the access keys from the Secret as environment variables. Simply provide your encoded keys in the secret.
4. AWS Region: Specify the correct AWS region in the controller pod configuration. This design uses us-east-1.
5. Namespace Deployment: Deploy all resources within a dedicated namespace. This design uses the ack-system namespace.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
EC2-INSTANCES

MESHERY419d

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
EC2-INSTANCES
Description
This design has two EC2 instances, a Bastion host that will be deployed in a public Subnet and a private Instance that will be deployed in a private subnet.
Caveats and Considerations
1. Before deploying this design, ensure that the EC2 Controller design is deployed first, followed by the VPC Workflow design. 2. Configure the Security Group ID and Subnet ID for both components.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
ELK stack

MESHERY4b16

RELATED PATTERNS
Pod Readiness
MESHERY4b83
ELK STACK
Description
ELK stack deployed in Kubernetes with a simple Python app, using Logstash, Kibana, Filebeat, and Elasticsearch.
Caveats and Considerations
Technologies included are Kubernetes, Elasticsearch, Logstash, Kibana, and Python.
Technologies
Related Patterns
Pod Readiness
MESHERY4b83
Edge Permission Relationship

MESHERY4ce5

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
EDGE PERMISSION RELATIONSHIP
Description
A relationship that binds permissions between components. E.g.: a ClusterRole defines a set of permissions, and a ClusterRoleBinding binds those permissions to subjects like service accounts.
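For reference, a minimal sketch of the kind of binding this relationship models (resource names here are hypothetical):
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader            # hypothetical role granting read access to Pods
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-pods
subjects:
  - kind: ServiceAccount      # the subject receiving the permissions
    name: app-sa
    namespace: default
roleRef:                      # the set of permissions being bound
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-reader
```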
Caveats and Considerations
NA
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Edge-Annotation-relationship

MESHERY48f2

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
EDGE-ANNOTATION-RELATIONSHIP
Description
Annotation Relationships refer to a visual representation used to indicate a relationship between two components without assigning any semantic meaning to that relationship. In this context, the relationship is depicted simply using an arrow to connect the components, signifying that they are related in some way, but not specifying the nature or significance of that relationship.
Caveats and Considerations
1. Ensure that the use of annotations clearly conveys the intended relationships between components. While annotations don’t have semantic meaning, their placement and direction should help viewers understand the context of the connection. 2. Consider providing accompanying documentation or a legend that explains the purpose of the annotations. This helps viewers understand why certain components are annotated and what the relationships represent, even if they lack specific semantics.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Edge-Binding-Permissions-Relationship

MESHERY426e

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
EDGE-BINDING-PERMISSIONS-RELATIONSHIP
Description
The Edge-Binding-Permissions Relationship defines how components connect to establish access control and permissions in a system. In the Edge-Binding-Permissions relationship, the binding components, such as role bindings and cluster role bindings, act as essential links that establish and enforce permissions. They connect service accounts to roles or cluster roles, determining what actions the service accounts are allowed to perform.
Caveats and Considerations
1. Clearly define the roles and their associated permissions before creating bindings. Understand what actions the service accounts will need to perform and ensure that roles are designed to grant only the necessary permissions to follow the principle of least privilege. 2. Plan for how role bindings and cluster role bindings will be managed over time. Consider the implications of adding or removing bindings, especially in dynamic environments where service accounts may change frequently. Ensure that you have processes in place for reviewing and updating permissions as needed.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Edge-Firewall-relationship

MESHERY4499

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
EDGE-FIREWALL-RELATIONSHIP
Description
An Edge Firewall Relationship in Meshery models a Kubernetes Network Policy that controls ingress and egress traffic between Pods. This relationship defines rules for Pod-to-Pod communication, specifying allowed and blocked traffic paths. By enforcing network policies, it secures inter-Pod communication and enhances overall cluster security.
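As an illustration, a minimal NetworkPolicy of the sort this relationship models (labels and ports are hypothetical):
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-ingress-policy
spec:
  podSelector:              # the Pods this policy protects
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:      # only Pods labeled app=frontend may connect
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```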
Caveats and Considerations
1. Ensure Pods are properly labeled, as Kubernetes Network Policies rely on labels to apply rules. Inconsistent or missing labels can cause the policy to behave unpredictably or fail to enforce the intended traffic rules.
2. Ensure that the correct ports and protocols are specified in the network policy. Misconfigured ports or protocols can result in unintended traffic blocking, preventing necessary communication between Pods or services.
3. Be mindful of how multiple network policies are configured within the same namespace. Overlapping or conflicting rules across policies may cause unexpected traffic behavior, so ensure that policies are clearly defined and ordered to avoid conflicts.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Edge-Network-Relationship: Service to Deployment

MESHERY4e80

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
EDGE-NETWORK-RELATIONSHIP: SERVICE TO DEPLOYMENT
Description
An Edge-Network Relationship in Meshery represents the networking configuration between Kubernetes components, typically illustrated by a dashed arrow connecting a Service to a Deployment. This dashed arrow signifies that the Service is linked to the Pods in the Deployment, exposing network access to them through a specified port. The accompanying port/network protocol notation indicates the port exposed by the Service and the corresponding protocol, such as TCP.
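A minimal sketch of the Service-to-Deployment pairing described above (names, labels, and ports are hypothetical):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web           # must match the Deployment's Pod labels
  ports:
    - protocol: TCP    # the port/protocol notation shown on the edge
      port: 80         # port exposed by the Service
      targetPort: 8080 # container port in the Pods
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 8080
```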
Caveats and Considerations
1. Use consistent and accurate labels in both the Service and Deployment configurations. This consistency is crucial for the Service to correctly route traffic to the intended Pods, ensuring proper communication within the edge-network relationship.
2. Clearly specify the network protocol (e.g., TCP or UDP) in the Service configuration. Ensure that the chosen protocol aligns with the requirements of the application and the behavior of the underlying Pods. Mismatched protocols can lead to unexpected communication failures.
3. Carefully manage the exposed ports for Services to avoid conflicts and ensure proper communication. Be sure to match the target port in the Service configuration to the container port in the Pods to ensure that traffic is routed correctly. This alignment is essential for the application to function properly, as mismatched ports can lead to connectivity issues.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Edge-mount-relationship

MESHERY4b1a

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
EDGE-MOUNT-RELATIONSHIP
Description
An Edge-Mount Relationship in Meshery represents the assignment of persistent storage to Pods via PersistentVolumeClaims (PVC). This relationship models how Pods claim storage from PersistentVolumes (PV) for data persistence, ensuring that workloads have access to the required storage resources.
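A minimal sketch of the claim-and-mount pattern this relationship models (names and sizes are hypothetical):
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi           # the Pod claims 1Gi from a matching PV
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data   # where the claimed storage appears in the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc
```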
Caveats and Considerations
1. Ensure the correct StorageClass is configured for your PersistentVolume. 2. Verify that the requested storage size in the PersistentVolumeClaim matches the actual storage needs of your application.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
ElasticSearch

MESHERY4654

RELATED PATTERNS
Pod Resource Request
MESHERY4a23
ELASTICSEARCH
Description
Kubernetes makes it trivial for anyone to build and scale Elasticsearch clusters. Here, you'll find how to do so. The current Elasticsearch version is 5.6.2.
Caveats and Considerations
Elasticsearch for Kubernetes: Current pod descriptors use an emptyDir for storing data in each data node container. This is meant to be for the sake of simplicity and should be adapted according to your storage needs.
Technologies
Related Patterns
Pod Resource Request
MESHERY4a23
Envoy using BoringSSL

MESHERY447c
ENVOY USING BORINGSSL
Description
Envoy uses BoringSSL as the default TLS library. BoringSSL supports setting private key methods for offloading asynchronous private key operations, and Envoy implements a private key provider framework to allow creation of Envoy extensions which handle TLS handshakes private key operations (signing and decryption) using the BoringSSL hooks.
CryptoMB private key provider is an Envoy extension which handles BoringSSL TLS RSA operations using Intel AVX-512 multi-buffer acceleration. When a new handshake happens, BoringSSL invokes the private key provider to request the cryptographic operation, and then the control returns to Envoy. The RSA requests are gathered in a buffer. When the buffer is full or the timer expires, the private key provider invokes Intel AVX-512 processing of the buffer. When processing is done, Envoy is notified that the cryptographic operation is done and that it may continue with the handshakes.
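For orientation, a sketch of how a CryptoMB private key provider is referenced inside an Envoy TLS certificate configuration; exact field shapes vary across Envoy versions, so treat this as illustrative only (file paths are hypothetical):
```yaml
tls_certificates:
  - certificate_chain:
      filename: /etc/envoy/cert.pem          # hypothetical certificate path
    private_key_provider:
      provider_name: cryptomb
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.private_key_providers.cryptomb.v3alpha.CryptoMbPrivateKeyMethodConfig
        private_key:
          filename: /etc/envoy/key.pem       # hypothetical key path
        poll_delay: 0.02s                    # the buffer flush timer described above
```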
Caveats and Considerations
test
Technologies
Example Edge-Firewall Relationship

MESHERY490f

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
EXAMPLE EDGE-FIREWALL RELATIONSHIP
Description
A relationship that acts as a firewall for ingress and egress traffic from Pods.
Caveats and Considerations
NA
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Example Edge-Network Relationship

MESHERY4ee9

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
EXAMPLE EDGE-NETWORK RELATIONSHIP
Description
The design showcases the operational dynamics of the Edge-Network Relationship. There are two ways you can use this design in your architecture design: 1. Clone this design by clicking the clone button. 2. Start from scratch by creating an edge-network relationship on your own. How to create an Edge-Network relationship on your own:
1. Navigate to MeshMap.
2. Click on the Kubernetes icon inside the dock. It will open a Kubernetes drawer from which you can select any component that Kubernetes supports.
3. Search for the Ingress and Service components from the search bar provided in the drawer.
4. Drag and drop both components onto the canvas.
5. Hover over the Ingress component; handlebars will show up on four sides of the component.
6. Move the cursor close to either of the handlebars; an arrow will show up. Click on that arrow. This will open up two options: 1. Question mark: opens the Help Center. 2. Arrow (edge handle): this edge handle is used for creating the edge relationship.
7. Click on the edge handle and move your cursor close to the Service component. An edge will appear going from the Ingress to the Service component, which represents the edge relationship between the two components.
8. Congratulations! You just created a relationship between Ingress and Service.
Caveats and Considerations
No Caveats or Considerations
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Example Edge-Permission Relationship

MESHERY4f9f

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
EXAMPLE EDGE-PERMISSION RELATIONSHIP
Description
The design showcases the operational dynamics of the Edge-Permission relationship. To engage with its functionality, adhere to the sequential steps below:
1. Duplicate this design by cloning it.
2. Modify the name of the service account. Upon completion, you'll notice that the connection visually represented by the edge vanishes, and the ClusterRoleBinding (CRB) is disassociated from both the ClusterRole (CR) and Service Account (SA).
To restore this relationship, you can either:
1. Drag the CRB from the CR to the SA, then release the mouse click. This action triggers the recreation of the relationship, as the relationship constraints get satisfied.
2. Or revert the name of the SA. This automatically recreates the relationship, as the relationship constraints get satisfied.
These are a few of the ways to experience this relationship.
Caveats and Considerations
NA
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Example Labels and Annotations

MESHERY4649

RELATED PATTERNS
Dapr
MESHERY45f7
EXAMPLE LABELS AND ANNOTATIONS
Description
This design contains an example of how labels and annotations can be created and organized.
Caveats and Considerations
No caveats
Technologies
Related Patterns
Dapr
MESHERY45f7
Exploring Kubernetes Pods With Meshery

MESHERY44f5

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
EXPLORING KUBERNETES PODS WITH MESHERY
Description
This design maps to the "Exploring Kubernetes Pods with Meshery" tutorial and is the end result of the design. It can be used to quickly deploy an nginx pod exposed through a service.
Caveats and Considerations
Service type is NodePort.
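The end result is roughly equivalent to the following sketch (the image tag is illustrative):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort      # exposes the Pod on a port of every node
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```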
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
External-Dns for Kubernetes

MESHERY4db0

EXTERNAL-DNS FOR KUBERNETES
Description
ExternalDNS synchronizes exposed Kubernetes Services and Ingresses with DNS providers. Inspired by Kubernetes DNS, Kubernetes' cluster-internal DNS server, ExternalDNS makes Kubernetes resources discoverable via public DNS servers. Like KubeDNS, it retrieves a list of resources (Services, Ingresses, etc.) from the Kubernetes API to determine a desired list of DNS records. Unlike KubeDNS, however, it's not a DNS server itself, but merely configures other DNS providers accordingly—e.g. AWS Route 53 or Google Cloud DNS. In a broader sense, ExternalDNS allows you to control DNS records dynamically via Kubernetes resources in a DNS provider-agnostic way.
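A typical usage sketch: annotate a Service with the desired hostname, and ExternalDNS creates the matching record in the configured provider (the hostname and labels here are hypothetical):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    # ExternalDNS picks this up and creates the record in the configured provider
    external-dns.alpha.kubernetes.io/hostname: nginx.example.com
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
```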
Caveats and Considerations
For more information and considerations, check out this repo: https://github.com/kubernetes-sigs/external-dns/?tab=readme-ov-file
Technologies
Fault-tolerant batch workloads on GKE

MESHERY4b55

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
FAULT-TOLERANT BATCH WORKLOADS ON GKE
Description
A batch workload is a process typically designed to have a start and a completion point. You should consider batch workloads on GKE if your architecture involves ingesting, processing, and outputting data instead of using raw data. Areas like machine learning, artificial intelligence, and high performance computing (HPC) feature different kinds of batch workloads, such as offline model training, batched prediction, data analytics, simulation of physical systems, and video processing. By designing containerized batch workloads, you can leverage the following GKE benefits:
- An open standard, broad community, and managed service.
- Cost efficiency from effective workload and infrastructure orchestration and specialized compute resources.
- Isolation and portability of containerization, allowing the use of cloud as overflow capacity while maintaining data security.
- Availability of burst capacity, followed by rapid scale down of GKE clusters.
Caveats and Considerations
Ensure proper networking of components for efficient functioning
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Fortio Server

MESHERY4614

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
FORTIO SERVER
Description
This infrastructure design defines a service and a deployment for a component called Fortio-server.
Service: fortio-server-service
- Type: Kubernetes Service
- Namespace: Default
- Port: Exposes port 8080
- Selector: Routes traffic to pods with the label app: fortio-server
- Session Affinity: None
- Service Type: ClusterIP
- MeshMap Metadata: Describes its relationship with Kubernetes and its category as Scheduling & Orchestration.
- Position: Positioned within a graphical representation of infrastructure.
Deployment: fortio-server-deployment
- Type: Kubernetes Deployment
- Namespace: Default
- Replicas: 1
- Selector: Matches pods with the label app: fortio-server
- Pod Template: Specifies a container image for Fortio-server, its resource requests, and a service account.
- Container Image: Uses the fortio/fortio:1.32.1 image
- MeshMap Metadata: Specifies its parent-child relationship with the fortio-server-service and provides styling information.
- Position: Positioned relative to the service within the infrastructure diagram.
This configuration sets up a service and a corresponding deployment for Fortio-server in a Kubernetes environment. The service exposes port 8080, while the deployment runs a container with the Fortio-server image. These components are visualized using MeshMap for tracking and visualization purposes.
Caveats and Considerations
Ensure networking is set up properly and enough resources are available.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
GCP DataMesh

MESHERY4e13

RELATED PATTERNS
Pod Resource Request
MESHERY4a23
GCP DATAMESH
Description
Implementing Data Mesh on Google Cloud Platform: Leverage Google Cloud's comprehensive suite of data services, including BigQuery, Dataflow, and Dataproc, to build a scalable and flexible Data Mesh platform. Utilize tools like Dataplex to create a unified data catalog and metadata management system, facilitating data discovery and access. Implement robust data governance policies and access controls using tools like Cloud IAM and DLP to ensure data security and compliance.
Caveats and Considerations
This design is a reference architecture only, and as such is not immediately deployable.
Technologies
Related Patterns
Pod Resource Request
MESHERY4a23
Gerrit operator

MESHERY4f6c

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
GERRIT OPERATOR
Description
This YAML configuration defines a Kubernetes Deployment named "gerrit-operator-deployment" for managing a containerized application called "gerrit-operator". It specifies that one replica of the application should be deployed. The Deployment ensures that the application is always running by managing pod replicas based on the provided selector labels. The template section describes the pod specification, including labels, service account, security context, and container configuration. The container named "gerrit-operator-container" is configured with an image from a container registry, with resource limits and requests defined for CPU and memory. Environment variables are set for various parameters like the namespace, pod name, and platform type. Additionally, specific intervals for syncing Gerrit projects and group members are defined. Further configuration options can be added as needed, such as volumes and initContainers.
Caveats and Considerations
1. Resource Requirements: Ensure that the resource requests and limits specified for CPU and memory are appropriate for the workload and the cluster's capacity to prevent performance issues or resource contention.
2. Image Pull Policy: The imagePullPolicy set to "Always" ensures that the latest image version is always pulled from the container registry. This may increase deployment time and consume more network bandwidth, so consider the trade-offs based on your deployment requirements.
3. Security Configuration: The security context settings, such as runAsNonRoot and allowPrivilegeEscalation: false, enhance pod security by enforcing non-root user execution and preventing privilege escalation. Verify that these settings align with your organization's security policies.
4. Environment Variables: Review the environment variables set for WATCH_NAMESPACE, POD_NAME, PLATFORM_TYPE, GERRIT_PROJECT_SYNC_INTERVAL, and GERRIT_GROUP_MEMBER_SYNC_INTERVAL to ensure they are correctly configured for your deployment environment and application requirements.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
GlusterFS Service

MESHERY4aa9
GLUSTERFS SERVICE
Description
GlusterFS is implemented as a distributed storage backend that spans multiple nodes, ensuring high availability and data redundancy. This design typically includes components such as GlusterFS servers deployed across Kubernetes nodes, which collectively form a distributed storage pool accessible to applications running in the cluster. The design leverages Kubernetes' persistent volume framework to dynamically provision and manage storage volumes backed by GlusterFS, enabling applications to store and retrieve data seamlessly.
Caveats and Considerations
While GlusterFS offers scalability, performance can vary depending on the workload and network conditions. Latency issues may arise
Technologies
GuestBook App

MESHERY4a54

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
GUESTBOOK APP
Description
The GuestBook App is a cloud-native application designed using Kubernetes as the underlying orchestration and management system. It consists of various services and components deployed within Kubernetes namespaces. The default namespace represents the main environment where the application operates. The frontend-cyrdx service is responsible for handling frontend traffic and is deployed as a Kubernetes service with a selector for the guestbook application and frontend tier. The frontend-fsfct deployment runs multiple replicas of the frontend component, which utilizes the gb-frontend image and exposes port 80. The guestbook namespace serves as a logical grouping for components related to the GuestBook App. The redis-follower-armov service handles follower Redis instances for the backend, while the redis-follower-nwlew deployment manages multiple replicas of the follower Redis container. The redis-leader-fhxla deployment represents the leader Redis container, and the redis-leader-vjtmi service exposes it as a Kubernetes service. These components work together to create a distributed and scalable architecture for the GuestBook App, leveraging Kubernetes for container orchestration and management.
Caveats and Considerations
Networking should be properly configured to enable communication between the frontend and backend components of the app.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
GuestBook App

MESHERY4b31

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
GUESTBOOK APP
Description
The GuestBook App is a cloud-native application designed using Kubernetes as the underlying orchestration and management system. It consists of various services and components deployed within Kubernetes namespaces. The default namespace represents the main environment where the application operates. The frontend-cyrdx service is responsible for handling frontend traffic and is deployed as a Kubernetes service with a selector for the guestbook application and frontend tier. The frontend-fsfct deployment runs multiple replicas of the frontend component, which utilizes the gb-frontend image and exposes port 80. The guestbook namespace serves as a logical grouping for components related to the GuestBook App. The redis-follower-armov service handles follower Redis instances for the backend, while the redis-follower-nwlew deployment manages multiple replicas of the follower Redis container. The redis-leader-fhxla deployment represents the leader Redis container, and the redis-leader-vjtmi service exposes it as a Kubernetes service. These components work together to create a distributed and scalable architecture for the GuestBook App, leveraging Kubernetes for container orchestration and management.
Caveats and Considerations
Networking should be properly configured to enable communication between the frontend and backend components of the app.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
GuestBook App (Copy)

MESHERY4263

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
GUESTBOOK APP (COPY)
Description
The GuestBook App is a cloud-native application designed using Kubernetes as the underlying orchestration and management system. It consists of various services and components deployed within Kubernetes namespaces. The default namespace represents the main environment where the application operates. The frontend-cyrdx service is responsible for handling frontend traffic and is deployed as a Kubernetes service with a selector for the guestbook application and frontend tier. The frontend-fsfct deployment runs multiple replicas of the frontend component, which utilizes the gb-frontend image and exposes port 80. The guestbook namespace serves as a logical grouping for components related to the GuestBook App. The redis-follower-armov service handles follower Redis instances for the backend, while the redis-follower-nwlew deployment manages multiple replicas of the follower Redis container. The redis-leader-fhxla deployment represents the leader Redis container, and the redis-leader-vjtmi service exposes it as a Kubernetes service. These components work together to create a distributed and scalable architecture for the GuestBook App, leveraging Kubernetes for container orchestration and management.
Caveats and Considerations
Networking should be properly configured to enable communication between the frontend and backend components of the app.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Guestbook App (All-in-One)

MESHERY4b20

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
GUESTBOOK APP (ALL-IN-ONE)
Description
This is a sample guestbook app to demonstrate distributed systems
Caveats and Considerations
1. Ensure networking is set up properly. 2. Ensure enough disk space is available.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
HAProxy_Ingress_Controller

MESHERY45dd

HAPROXY_INGRESS_CONTROLLER
Description
HAProxy Ingress is a Kubernetes ingress controller: it configures a HAProxy instance to route incoming requests from an external network to the in-cluster applications. The routing configurations are built reading specs from the Kubernetes cluster. Updates made to the cluster are applied on the fly to the HAProxy instance.
Caveats and Considerations
Make sure that paths in the ingress are configured correctly. For more caveats and considerations, check out the docs: https://haproxy-ingress.github.io/docs/
Technologies
Hashicorp Vault

MESHERY4894
HASHICORP VAULT
Description
The Vault server cluster can run directly on Kubernetes. This can be used by applications running within Kubernetes as well as external to Kubernetes, as long as they can communicate to the server via the network. Accessing and Storing Secrets: Applications using the Vault service running in Kubernetes can access and store secrets from Vault using a number of different secret engines and authentication methods. Running a Highly Available Vault Service: By using pod affinities, highly available backend storage (such as Consul) and auto-unseal, Vault can become a highly available service in Kubernetes. Encryption as a Service: Applications using the Vault service running in Kubernetes can leverage the Transit secret engine as "encryption as a service". This allows applications to offload encryption needs to Vault before storing data at rest. Audit Logs for Vault: Operators can choose to attach a persistent volume to the Vault cluster which can be used to store audit logs. And more! Vault can run directly on Kubernetes, so in addition to the native integrations provided by Vault itself, any other tool built for Kubernetes can choose to leverage Vault.
Caveats and Considerations
Refer to the docs for caveats and considerations: https://developer.hashicorp.com/vault/docs/platform/k8s
Technologies
Hello Kubernetes Tutorial

MESHERY4d00

RELATED PATTERNS
Acme Operator
MESHERY4627
HELLO KUBERNETES TUTORIAL
Description
This tutorial will get you up and running with Dapr in a Kubernetes cluster. You will be deploying the same applications from Hello World. To recap, the Python App generates messages and the Node app consumes and persists them.
Caveats and Considerations
Make sure to deploy the Dapr Helm chart (including CRDs) into Meshery Playground before deploying this application, so that native Dapr objects can be recognized.
Technologies
Related Patterns
Acme Operator
MESHERY4627
Hello WASM

MESHERY4255

RELATED PATTERNS
Pod Resource Request
MESHERY4a23
HELLO WASM
Description
Sample WASM implementation with service mesh
Caveats and Considerations
No caveats
Technologies
Related Patterns
Pod Resource Request
MESHERY4a23
Hierarchical Parent Relationship

MESHERY4a65

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
HIERARCHICAL PARENT RELATIONSHIP
Description
A relationship that defines whether a component can be a parent of other components. E.g.: a Namespace is the parent, and Role and ConfigMap are children.
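A minimal sketch of the parent-child nesting described above (names are hypothetical):
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo          # the parent component
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: demo     # child scoped under the parent Namespace
data:
  LOG_LEVEL: info
```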
Caveats and Considerations
""
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Hierarchical Inventory Relationship

MESHERY4c5a

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
HIERARCHICAL INVENTORY RELATIONSHIP
Description
A hierarchical inventory relationship in which the configuration of the parent component is patched with the configuration of the child component. E.g.: the configuration of the Deployment (parent) component is patched with the configuration received from the ConfigMap (child) component.
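As a sketch of the patching described above, a Deployment can consume a ConfigMap's data so the child's configuration lands in the parent (names are hypothetical):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: nginx:1.25
          envFrom:
            - configMapRef:
                name: app-config   # child ConfigMap patched into the parent's config
```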
Caveats and Considerations
NA
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Hierarchical-Parent-Inventory-Relationship

MESHERY48e8

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
HIERARCHICAL-PARENT-INVENTORY-RELATIONSHIP
Description
This represents a parent-child relationship between components, where one component is a dependency of another. Parent-child relationships show clear lineage, similar to a family tree. In this design, the namespace serves as a parent to the config map and role.
Caveats and Considerations
Dependency Awareness: Users should understand that the child components (e.g., config maps, roles) depend on the parent (e.g., namespace). If the parent component is deleted or modified, it will affect the child components.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Hierarchical-Parent-Inventory-Relationship

MESHERY4ebe

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
HIERARCHICAL-PARENT-INVENTORY-RELATIONSHIP
Description
This represents a parent-child relationship between components, where one component is a dependency of another. Parent-child relationships show clear lineage, similar to a family tree. In this design, the namespace serves as a parent to the config map and role.
Caveats and Considerations
Dependency Awareness: Users should understand that the child components (e.g., config maps, roles) depend on the parent (e.g., namespace). If the parent component is deleted or modified, it will affect the child components.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Hierarchical-Parent-Wallet-Relationship

MESHERY4f15

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
HIERARCHICAL-PARENT-WALLET-RELATIONSHIP
Description
In a hierarchical-parent-wallet relationship, one component (the "wallet") serves as a container or host for another component, similar to a parent-child structure. In this example, the WASM plugin acts as the parent (or wallet) that "contains" the WASM filter, representing the idea that the filter operates within the scope and capabilities provided by the plugin. For instance, in a service mesh like Istio, a WASM plugin is deployed into an Envoy proxy and serves as the runtime environment for a WASM filter. A custom WASM filter may be designed to modify HTTP requests (e.g., by adding a security header), but it relies on the plugin to intercept network traffic and integrate with the proxy’s pipeline. The plugin manages the lifecycle of the filter, ensuring it is executed whenever relevant traffic is processed. Without the plugin, the filter would not be able to apply its logic, emphasizing how the plugin enables the filter’s operation. This wallet nature highlights the idea that the plugin acts as a container that securely "holds" the filter, providing it with the necessary infrastructure and environment to function. Just as a wallet holds its contents, the plugin ensures the filter operates properly within the boundaries and resources it provides, without which the filter would not function independently.
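For the Istio case mentioned above, a sketch of a WasmPlugin resource that "holds" a filter (the OCI URL and plugin config are hypothetical):
```yaml
apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: add-security-header
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway        # the proxies that load the filter
  url: oci://registry.example.com/filters/add-header:v1  # hypothetical filter image
  phase: AUTHN
  pluginConfig:                    # passed to the contained WASM filter
    header: x-custom-security
```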
Caveats and Considerations
Understand that the child component (e.g., the filter) is dependent on the parent component (e.g., the plugin). If the parent (wallet) is not correctly configured or functional, the child component will not work as expected. Ensure the parent component is properly configured.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
HorizontalPodAutoscaler

MESHERY41d1

RELATED PATTERNS
Pod Resource Request
MESHERY4a23
HORIZONTALPODAUTOSCALER
Description
A HorizontalPodAutoscaler (HPA for short) automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of automatically scaling the workload to match demand. Horizontal scaling means that the response to increased load is to deploy more Pods. This is different from vertical scaling, which for Kubernetes would mean assigning more resources (for example: memory or CPU) to the Pods that are already running for the workload. If the load decreases, and the number of Pods is above the configured minimum, the HorizontalPodAutoscaler instructs the workload resource (the Deployment, StatefulSet, or other similar resource) to scale back down.
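A minimal sketch of an HPA targeting a Deployment (the target name and thresholds are hypothetical):
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:           # the workload resource being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2            # never scale below this floor
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add Pods above 70% average CPU
```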
Caveats and Considerations
Modify deployments and names according to requirement
Technologies
Related Patterns
Pod Resource Request
MESHERY4a23
ISCSI pod creation

MESHERY4f48

RELATED PATTERNS
Pod Resource Request
MESHERY4a23
ISCSI POD CREATION
Description
Connect Kubernetes clusters to iSCSI devices for scalable storage solutions, supporting direct or multipath connections with CHAP authentication.
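A sketch of a Pod mounting an iSCSI volume with CHAP session authentication (the portal, IQN, and secret name are hypothetical):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: iscsi-demo
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
      volumeMounts:
        - name: iscsi-vol
          mountPath: /mnt/iscsi
  volumes:
    - name: iscsi-vol
      iscsi:
        targetPortal: 10.0.0.10:3260                 # hypothetical portal address
        iqn: iqn.2001-04.com.example:storage.disk1   # hypothetical target IQN
        lun: 0
        fsType: ext4
        chapAuthSession: true      # CHAP credentials come from the referenced secret
        secretRef:
          name: chap-secret
```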
Caveats and Considerations
Ensure compatibility of Kubernetes and iSCSI versions, configure network settings appropriately, and monitor performance and scalability of both storage and network infrastructure.
Technologies
Related Patterns
Pod Resource Request
MESHERY4a23
Install-Traefik-as-ingress-controller

MESHERY4796

INSTALL-TRAEFIK-AS-INGRESS-CONTROLLER
Description
This design creates ServiceAccount, DaemonSet, Service, ClusterRole, and ClusterRoleBinding resources for Traefik. The DaemonSet ensures that a single Traefik instance is deployed on each node in the cluster, facilitating load balancing and routing of incoming traffic. The Service allows external traffic to reach Traefik, while the ClusterRole and ClusterRoleBinding provide the necessary permissions for Traefik to interact with Kubernetes resources such as services, endpoints, and ingresses. Overall, this setup enables Traefik to efficiently manage ingress traffic within the Kubernetes environment, providing features like routing, load balancing, and SSL termination.
Caveats and Considerations
- Resource Utilization: Ensure monitoring and scalability to manage resource consumption across nodes, especially in large clusters.
- Security Measures: Implement strict access controls and firewall rules to protect Traefik's admin port (8080) from unauthorized access.
- Configuration Complexity: Understand Traefik's configuration intricacies for routing rules and SSL termination to avoid misconfigurations.
- Compatibility Testing: Regularly test Traefik's compatibility with Kubernetes and other cluster components before upgrading versions.
- High Availability Setup: Employ strategies like pod anti-affinity rules to ensure Traefik's availability and uptime.
- Performance Optimization: Conduct performance tests to minimize latency and overhead introduced by Traefik in the data path.
Technologies
Istio Control Plane

MESHERY4a09

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
ISTIO CONTROL PLANE
Description
This design includes an Istio control plane, which will deploy to the istio-system namespace by default.
Caveats and Considerations
No namespaces are annotated for sidecar provisioning in this design.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Istio HTTP Header Filter (Clone)

MESHERY4bfd

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
ISTIO HTTP HEADER FILTER (CLONE)
Description
This is a test design
Caveats and Considerations
NA
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Istio Operator

MESHERY4a76

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
ISTIO OPERATOR
Description
This YAML defines a Kubernetes Deployment for the Istio Operator within the istio-operator namespace. The deployment ensures a single replica of the Istio Operator pod is always running, which is managed by a service account named istio-operator. The deployment's metadata includes the namespace and the deployment name. The pod selector matches pods with the label name: istio-operator, ensuring the correct pods are managed. The pod template specifies metadata and details for the containers, including the container name istio-operator and the image gcr.io/istio-testing/operator:1.5-dev, which runs the istio-operator command with the server argument.
Caveats and Considerations
1. Namespace Configuration: Ensure that the istio-operator namespace exists before applying this deployment. If the namespace is not present, the deployment will fail.
2. Image Version: The image specified (gcr.io/istio-testing/operator:1.5-dev) is a development version. It is crucial to verify the stability and compatibility of this version for production environments. Using a stable release version is generally recommended.
3. Resource Allocation: The resource limits and requests are set to specific values (200m CPU, 256Mi memory for limits; 50m CPU, 128Mi memory for requests). These values should be reviewed and adjusted based on the actual resource availability and requirements of your Kubernetes cluster to prevent resource contention or overallocation.
4. Leader Election: The environment variables include LEADER_ELECTION_NAMESPACE, which is derived from the pod's namespace. Ensure that the leader election mechanism is properly configured and that only one instance of the operator becomes the leader to avoid conflicts.
5. Security Context: The deployment does not specify a security context for the container. It is advisable to review and define appropriate security contexts to enhance the security posture of the deployment, such as running the container as a non-root user.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
JAX 'Hello World' using NVIDIA GPUs A100-80GB on GKE

MESHERY4cfd

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
JAX 'HELLO WORLD' USING NVIDIA GPUS A100-80GB ON GKE
Description
JAX is a rapidly growing Python library for high-performance numerical computing and machine learning (ML) research. With applications in large language models, drug discovery, physics ML, reinforcement learning, and neural graphics, JAX has seen incredible adoption in the past few years. JAX offers numerous benefits for developers and researchers, including an easy-to-use NumPy API, auto differentiation and optimization. JAX also includes support for distributed processing across multi-node and multi-GPU systems in a few lines of code, with accelerated performance through XLA-optimized kernels on NVIDIA GPUs. We show how to run JAX multi-GPU-multi-node applications on GKE (Google Kubernetes Engine) using the A2 ultra machine series, powered by NVIDIA A100 80GB Tensor Core GPUs. It runs a simple Hello World application on 4 nodes with 8 processes and 8 GPUs each.
Caveats and Considerations
Ensure networking is set up properly and the correct annotations are applied to each resource.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Jaeger operator

MESHERY4ab9

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
JAEGER OPERATOR
Description
This YAML configuration defines a Kubernetes Deployment for the Jaeger Operator. This Deployment, named "jaeger-operator," specifies that a container will be created using the jaegertracing/jaeger-operator:master image. The container runs with the argument "start," which initiates the operator's main process. Additionally, the container is configured with an environment variable, LOG-LEVEL, set to "debug," enabling detailed logging for troubleshooting and monitoring purposes. This setup allows the Jaeger Operator to manage Jaeger tracing instances within the Kubernetes cluster, ensuring efficient deployment, scaling, and maintenance of distributed tracing components.
Caveats and Considerations
1. Image Tag: The image tag master indicates that the latest, potentially unstable version of the Jaeger Operator is being used. For production environments, it's safer to use a specific, stable version to avoid unexpected issues.
2. Resource Limits and Requests: The deployment does not specify resource requests and limits for the container. It's crucial to define these to ensure that the Jaeger Operator has enough CPU and memory to function correctly, while also preventing it from consuming excessive resources on the cluster.
3. Replica Count: The spec section does not specify the number of replicas for the deployment. By default, Kubernetes will create one replica, which might not provide high availability. Consider increasing the replica count for redundancy.
4. Namespace: The deployment does not specify a namespace. Ensure that the deployment is applied to the appropriate namespace, particularly if you have a multi-tenant cluster.
5. Security Context: There is no security context defined. Adding a security context can enhance the security posture of the container by restricting permissions and enforcing best practices like running as a non-root user.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Jenkins operator

MESHERY42f3

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
JENKINS OPERATOR
Description
This YAML configuration defines a Kubernetes Deployment for the Jenkins Operator, ensuring the deployment of a single instance within the cluster. It specifies metadata including labels and annotations for identification and description purposes. The deployment is set to run one replica of the Jenkins Operator container, configured with security settings to run as a non-root user and disallow privilege escalation. Environment variables are provided for dynamic configuration within the container, such as the namespace and Pod name. Resource requests and limits are also defined to manage CPU and memory allocation effectively. Overall, this Deployment aims to ensure the smooth and secure operation of the Jenkins Operator within the Kubernetes environment.
Caveats and Considerations
1. Resource Allocation: The CPU and memory requests and limits defined in the configuration should be carefully adjusted based on the workload and available resources in the Kubernetes cluster to avoid resource contention and potential performance issues.
2. Image Repository Access: Ensure that the container image specified in the configuration (myregistry/jenkins-operator:latest) is accessible from the Kubernetes cluster. Proper image pull policies and authentication mechanisms should be configured to allow the Kubernetes nodes to pull the image from the specified registry.
3. Security Context: The security settings configured in the security context of the container (runAsNonRoot, allowPrivilegeEscalation) are essential for maintaining the security posture of the Kubernetes cluster. Ensure that these settings align with your organization's security policies and best practices.
4. Environment Variables: The environment variables defined in the configuration, such as WATCH_NAMESPACE, POD_NAME, OPERATOR_NAME, and PLATFORM_TYPE, are used to dynamically configure the Jenkins Operator container. Ensure that these variables are correctly set to provide the necessary context and functionality to the operator.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
K8's-Cluster-overprovisioner

MESHERY4aab

RELATED PATTERNS
Pod Resource Request
MESHERY4a23
K8'S-CLUSTER-OVERPROVISIONER
Description
This design provides a buffer for cluster autoscaling to allow overprovisioning of cluster nodes. This is desired when you have workloads that need to scale up quickly without waiting for the new cluster nodes to be created and join the cluster. It works by creating a deployment that creates pods of a lower-than-default PriorityClass. These pods request resources from the cluster but don't actually consume any resources. These pods are then evicted, allowing other normal pods to be created while also triggering a scale-up by the cluster autoscaler.
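A sketch of the mechanism, following the pattern from the cluster-autoscaler FAQ linked below (values are illustrative):
```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning
value: -10                  # below the default of 0, so these pods yield first
globalDefault: false
description: "Placeholder pods that reserve headroom for real workloads"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning
spec:
  replicas: 2
  selector:
    matchLabels:
      run: overprovisioning
  template:
    metadata:
      labels:
        run: overprovisioning
    spec:
      priorityClassName: overprovisioning
      containers:
        - name: reserve-resources
          image: registry.k8s.io/pause:3.9   # does nothing, only holds the requests
          resources:
            requests:
              cpu: "1"
              memory: 1Gi
```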
Caveats and Considerations
For more information, see https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-can-i-configure-overprovisioning-with-cluster-autoscaler
Technologies
Related Patterns
Pod Resource Request
MESHERY4a23
KEDA HTTPRequestsScaler

MESHERY4637

RELATED PATTERNS
Untitled Design
MESHERY4135
KEDA HTTPREQUESTSSCALER
Description
This design makes use of the external add-on, KEDA HTTP, for event-based autoscaling of HTTP workloads on Kubernetes. See https://artifacthub.io/packages/keda-scaler/keda-official-external-scalers/keda-add-ons-http for details on this specific scaler. The KEDA HTTP Add-on allows Kubernetes users to automatically scale their HTTP servers up and down (including to/from zero) based on incoming HTTP traffic.
Caveats and Considerations
KEDA scalers can both detect if a deployment should be activated or deactivated, and feed custom metrics for a specific event source.
Technologies
Related Patterns
Untitled Design
MESHERY4135
KEDA HTTPRequestsScaler

MESHERY4462

RELATED PATTERNS
Untitled Design
MESHERY4135
KEDA HTTPREQUESTSSCALER
Description
This design makes use of the external add-on, KEDA HTTP, for event-based autoscaling of HTTP workloads on Kubernetes. See https://artifacthub.io/packages/keda-scaler/keda-official-external-scalers/keda-add-ons-http for details on this specific scaler. The KEDA HTTP Add-on allows Kubernetes users to automatically scale their HTTP servers up and down (including to/from zero) based on incoming HTTP traffic. In order to do so, KEDA HTTP add-on, deploys a proxy service and requires all traffic to be routed via this proxy service. The proxy service is deployed automatically by the KEDA add-on operator, the name for the deployed service follows the following scheme "keda-add-ons-http-interceptor-proxy".
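For orientation, a sketch of an HTTPScaledObject that routes traffic through the interceptor proxy; field names have shifted between add-on versions, so check the version you deploy (the workload names are hypothetical):
```yaml
apiVersion: http.keda.sh/v1alpha1
kind: HTTPScaledObject
metadata:
  name: httpbin
spec:
  hosts:
    - httpbin.com          # host header the interceptor routes on
  scaleTargetRef:
    deployment: httpbin    # hypothetical workload behind the proxy
    service: httpbin
    port: 8080
  replicas:
    min: 0                 # scale to zero when no traffic arrives
    max: 10
```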
Caveats and Considerations
1. The dependent design "KEDA Setup", needs to be deployed first for the overall design to function properly. 2. After deploying the design, initiate the performance test on the exposed endpoint for the service named "keda-add-ons-http-interceptor-proxy", ensuring to include "httpbin.com" as the host header if utilizing a different host.
Technologies
Related Patterns
Untitled Design
MESHERY4135
KEDA Setup

MESHERY4b6f

RELATED PATTERNS
Untitled Design
MESHERY4135
KEDA SETUP
Description
KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event driven scale for any container running in Kubernetes. The design deploys essential KEDA components (including CRDs) to ensure your cluster is properly set up for autoscaling using KEDA.
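Once this setup is deployed, autoscaling is driven by ScaledObject resources; a minimal sketch (the target and trigger are hypothetical):
```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    name: worker           # hypothetical Deployment to scale
  minReplicaCount: 0       # KEDA can scale to zero
  maxReplicaCount: 10
  triggers:
    - type: cpu            # any supported event source works here
      metricType: Utilization
      metadata:
        value: "60"
```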
Caveats and Considerations
For this design to function properly, Kubernetes version v1.23 or above is required.
Technologies
Related Patterns
Untitled Design
MESHERY4135
KEDA Setup

MESHERY4096

RELATED PATTERNS
Untitled Design
MESHERY4135
KEDA SETUP
Description
KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event driven scale for any container running in Kubernetes. The design deploys essential KEDA components (including CRDs) to ensure your cluster is properly set up for autoscaling using KEDA.
Caveats and Considerations
For this design to function properly, Kubernetes version v1.23 or above is required.
Technologies
Related Patterns
Untitled Design
MESHERY4135
Keycloak operator

MESHERY4f70

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
KEYCLOAK OPERATOR
Description
This YAML snippet describes a Kubernetes Deployment for a Keycloak operator, ensuring a single replica. It specifies labels and annotations for metadata, including a service account. The pod template defines a container running the Keycloak operator image, with environment variables set for namespace and pod name retrieval. Security context settings prevent privilege escalation. Probes are configured for liveness and readiness checks on port 8081, with resource requests and limits ensuring proper resource allocation for the container.
Caveats and Considerations
1. Single Replica: The configuration specifies only one replica, which means there's no built-in redundancy or high availability. Consider adjusting the replica count based on your availability requirements.
2. Resource Allocation: Resource requests and limits are set for CPU and memory. Ensure these values are appropriate for your workload and cluster capacity to avoid performance issues or resource contention.
3. Security Context: The security context is configured to run the container as a non-root user and disallow privilege escalation. Ensure these settings align with your security policies and container requirements.
4. Probes Configuration: Liveness and readiness probes are set up to check the health of the container on port 8081. Ensure that the specified endpoints (/healthz and /readyz) are correctly implemented in the application code.
5. Namespace Configuration: The WATCH_NAMESPACE environment variable is set to an empty string, potentially causing the operator to watch all namespaces. Ensure this behavior aligns with your intended scope of operation and namespace isolation requirements.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Kubernetes Deployment with Azure File Storage

MESHERY487a

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
KUBERNETES DEPLOYMENT WITH AZURE FILE STORAGE
Description
This design sets up a Kubernetes Deployment deploying two NGINX containers. Each container utilizes an Azure File storage volume for shared data. The NGINX instances serve web content while accessing an Azure File share, enabling scalable and shared storage for the web servers.
Caveats and Considerations
1. Azure Configuration: Ensure that your Azure configuration, including secrets, is correctly set up to access the Azure File share.
2. Data Sharing: Multiple NGINX containers share the same storage. Be cautious when handling write operations to avoid conflicts or data corruption.
3. Scalability: Consider the scalability of both NGINX and Azure File storage to meet your application's demands.
4. Security: Safeguard the secrets used to access Azure resources and limit access to only authorized entities.
5. Pod Recovery: Ensure that the pod recovery strategy is well-defined to handle disruptions or node failures.
6. Azure Costs: Monitor and manage costs associated with Azure File storage, as it may incur charges based on usage.
7. Maintenance: Plan for regular maintenance and updates of both NGINX and Azure configurations to address security and performance improvements.
8. Monitoring: Implement monitoring and alerts for both the NGINX containers and Azure File storage to proactively detect and address issues.
9. Backup and Disaster Recovery: Establish a backup and disaster recovery plan to safeguard data stored in Azure File storage.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Kubernetes Engine Training Example

MESHERY40f1

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
KUBERNETES ENGINE TRAINING EXAMPLE
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Kubernetes Global Balancer

MESHERY4390

RELATED PATTERNS
Untitled Design
MESHERY4135
KUBERNETES GLOBAL BALANCER
Description
GSLB (Global Server Load Balancing) solutions have typically been the domain of proprietary network software and hardware vendors, installed and managed by siloed network teams. k8gb is a completely open source, cloud native, global load balancing solution for Kubernetes. k8gb focuses on load balancing traffic across geographically dispersed Kubernetes clusters using multiple load balancing strategies to meet requirements such as region failover for high availability. Global load balancing for any Kubernetes Service can now be enabled and managed by any operations or development team in the same Kubernetes-native way as any other custom resource.
Caveats and Considerations
Key differentiators:
- Load balancing is based on the time-proven DNS protocol, which is perfect for global scope and extremely reliable.
- No dedicated management cluster and no single point of failure.
- Kubernetes-native application health checks utilizing the status of liveness and readiness probes for load-balancing decisions.
- Configuration with a single Kubernetes CRD of Gslb kind.
Refer to this repo for more info: https://github.com/k8gb-io/k8gb
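A sketch of that Gslb custom resource (the host, service, and strategy are hypothetical, and the exact schema depends on the k8gb version):
```yaml
apiVersion: k8gb.absa.oss/v1beta1
kind: Gslb
metadata:
  name: app-gslb
  namespace: demo
spec:
  ingress:                       # a standard ingress spec embedded in the CRD
    rules:
      - host: app.example.com
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: app
                  port:
                    number: 80
  strategy:
    type: roundRobin             # or failover with a primaryGeoTag
```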
Technologies
Related Patterns
Untitled Design
MESHERY4135
Kubernetes Metrics Server Configuration

MESHERY4892

RELATED PATTERNS
Minecraft App
MESHERY48dd
KUBERNETES METRICS SERVER CONFIGURATION
Description
This design configures the Kubernetes Metrics Server for monitoring cluster-wide resource metrics. It defines a Kubernetes Deployment, Role-Based Access Control (RBAC) rules, and other resources for the Metrics Server's deployment and operation.
Caveats and Considerations
This design configures the Kubernetes Metrics Server for resource monitoring. Ensure that RBAC and ServiceAccount configurations are secure to prevent unauthorized access. Adjust Metrics Server settings for specific metrics and monitor resource usage regularly to prevent resource overuse. Implement probes for reliability and maintain correct API service settings. Plan for scalability and choose the appropriate namespace. Set up monitoring for issue detection and establish data backup and recovery plans. Regularly update components for improved security and performance.
Technologies
Related Patterns
Minecraft App
MESHERY48dd
Kubernetes Service for Product Page App

MESHERY4c57

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
KUBERNETES SERVICE FOR PRODUCT PAGE APP
Description
This design installs a namespace, a Deployment, and a Service. Both the Deployment and the Service are deployed in the my-bookinfo namespace. The Service is exposed on port 9081.
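A minimal sketch of such a Service (only the my-bookinfo namespace and port 9081 come from the description; the name, selector, and targetPort are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: productpage
  namespace: my-bookinfo
spec:
  selector:
    app: productpage      # must match the Deployment's Pod labels
  ports:
    - port: 9081          # port the Service is exposed on
      targetPort: 9080    # assumed container port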
Caveats and Considerations
Ensure sufficient resources are available in the cluster and that the service is exposed properly.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Kubernetes cronjob

MESHERY4483

RELATED PATTERNS
Minecraft App
MESHERY48dd
KUBERNETES CRONJOB
Description
This design contains a single Kubernetes Cronjob.
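For learning purposes, a minimal CronJob manifest might look like the sketch below (name, schedule, and image are illustrative assumptions, not taken from the design):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"        # run every five minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: hello
              image: busybox
              command: ["sh", "-c", "date; echo Hello from the CronJob"]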
Caveats and Considerations
This design is for learning purposes and may be freely copied and distributed.
Technologies
Related Patterns
Minecraft App
MESHERY48dd
Limit Range

MESHERY4cb9

RELATED PATTERNS
Untitled Design
MESHERY4135
LIMIT RANGE
Description
This design captures Kubernetes policies for resource allocation and usage limits within Pods and containers. It provides guidelines on setting constraints such as CPU and memory limits, ensuring efficient resource management across clusters. This design supports enforcing resource quotas at the namespace level, promoting fair resource distribution and preventing resource contention. It emphasizes Kubernetes' flexibility in defining and enforcing resource limits based on application requirements and cluster capacity.
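A minimal LimitRange sketch along these lines (all values are illustrative assumptions):

apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
  namespace: default
spec:
  limits:
    - type: Container
      defaultRequest:      # applied when a container specifies no request
        cpu: 250m
        memory: 128Mi
      default:             # applied when a container specifies no limit
        cpu: 500m
        memory: 256Mi
      max:                 # hard ceiling for any container in the namespace
        cpu: "1"
        memory: 512Mi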
Caveats and Considerations
Careful consideration is required when setting resource limits to avoid underprovisioning or overprovisioning resources, which can affect application performance and cluster efficiency.
Technologies
Related Patterns
Untitled Design
MESHERY4135
Litmus Chaos Operator

MESHERY4844

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
LITMUS CHAOS OPERATOR
Description
This YAML file defines a Kubernetes Deployment for the Litmus Chaos Operator. It creates a single replica of the chaos-operator pod within the litmus namespace. The deployment is labeled for organization and management purposes, specifying details like the version and component. The container runs the litmuschaos/chaos-operator:ci image with a command to enable leader election and sets various environment variables for operation. Additionally, it uses the litmus service account to manage permissions, ensuring the operator runs with the necessary access rights within the Kubernetes cluster.
Caveats and Considerations
1. Namespace Watch: The WATCH_NAMESPACE environment variable is set to an empty string, which means the operator will watch all namespaces. This can have security implications and might require broader permissions. Consider restricting it to specific namespaces if not required. 2. Image Tag: The image is set to litmuschaos/chaos-operator:ci, which uses the latest code from the continuous integration pipeline. This might include unstable or untested features. For production environments, it's recommended to use a stable and tagged version of the image. 3. Leader Election: The -leader-elect=true argument ensures high availability by allowing only one active instance of the operator at a time. Ensure that this behavior aligns with your high-availability requirements. 4. Resource Limits and Requests: There are no resource requests or limits defined for the chaos-operator container. It's good practice to specify these to ensure the container has the necessary resources and to prevent it from consuming excessive resources.
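A hedged sketch of the container section with the caveats above applied (only the image, the leader-election argument, and the WATCH_NAMESPACE variable come from the description; the resource values are illustrative additions per caveat 4):

containers:
  - name: chaos-operator
    image: litmuschaos/chaos-operator:ci   # pin a stable tag for production (caveat 2)
    args:
      - -leader-elect=true                 # single active instance (caveat 3)
    env:
      - name: WATCH_NAMESPACE
        value: ""                          # empty = watch all namespaces (caveat 1)
    resources:                             # not in the original design; added per caveat 4
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 250m
        memory: 256Mi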
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Load Balanced AWS Architecture

MESHERY4079

LOAD BALANCED AWS ARCHITECTURE
Description
This design illustrates a robust and scalable architecture for deploying applications on Amazon Web Services (AWS) with load balancing capabilities. This design leverages AWS Elastic Load Balancers (ELB) to distribute incoming traffic across multiple instances of your application, ensuring high availability and reliability. The architecture typically includes Auto Scaling groups to automatically adjust the number of running instances based on traffic demand, further enhancing the system’s ability to handle varying loads.
Caveats and Considerations
1. AWS services can accumulate costs quickly, especially with high traffic volumes and large-scale deployments. It's essential to monitor and manage usage to avoid unexpected expenses. 2. Network latency can be introduced by load balancers and cross-region data transfers. It's important to design your architecture to minimize latency, particularly for latency-sensitive applications.
Technologies
Match-Labels-Relationship

MESHERY4076

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
MATCH-LABELS-RELATIONSHIP
Description
A Match Labels Relationship in Meshery refers to the configuration where Kubernetes components are linked based on shared labels. This relationship is essential for identifying and managing groups of Pods, Services, or other resources that share common characteristics. By using matching labels, Kubernetes can efficiently route traffic, apply policies, and manage workloads across components that are part of the same logical grouping.
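A minimal sketch of the label linkage (all names are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web          # Deployment manages Pods carrying this label
  template:
    metadata:
      labels:
        app: web        # the shared label that establishes the relationship
    spec:
      containers:
        - name: nginx
          image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web            # Service routes to the same labeled Pods
  ports:
    - port: 80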
Caveats and Considerations
1. Ensure that labels are consistently applied across related Kubernetes components. Inconsistent or misspelled labels can lead to unexpected behavior and make it difficult to manage or identify related resources. 2. Use clear and descriptive labels that convey the purpose or role of the component. This helps with understanding the architecture and functionality of your application, making it easier to manage and troubleshoot. 3. Consider how label relationships will affect scaling and updating strategies. Changes to labels or the addition of new components may require adjustments to selectors and configurations to maintain proper functionality and resource management across your Kubernetes environment.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Mattermost Cluster Install

MESHERY41c2

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
MATTERMOST CLUSTER INSTALL
Description
The cluster-installation service is based on the Mattermost Operator model and operates at version 0.3.3. It is responsible for managing the installation and configuration of the Mattermost operator in the default namespace.
Caveats and Considerations
Ensure sufficient resources are available in the cluster
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Meshery v0.6.73

MESHERY4b52

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
MESHERY V0.6.73
Description
A self-service engineering platform, Meshery, is the open source, cloud native manager that enables the design and management of all Kubernetes-based infrastructure and applications. Among other features, Meshery, as an extensible platform, offers visual and collaborative GitOps, freeing you from the chains of YAML while managing Kubernetes multi-cluster deployments.
Caveats and Considerations
Not for Production deployment. Does not include Meshery Cloud.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Minecraft App

MESHERY48dd

RELATED PATTERNS
Apache Airflow
MESHERY41d4
MINECRAFT APP
Description
Deploys a Minecraft application.
Caveats and Considerations
Works on Kubernetes v1.25 only.
Technologies
Related Patterns
Apache Airflow
MESHERY41d4
Mount(Pod -> PersistentVolume)

MESHERY429b

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
MOUNT(POD -> PERSISTENTVOLUME)
Description
A relationship that represents volume mounts between components. For example, the Pod component is bound to the PersistentVolume component via the PersistentVolumeClaim component.
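A minimal sketch of that binding chain (the claim name and paths are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data          # where the volume appears in the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc         # the PVC through which the Pod binds to a PV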
Caveats and Considerations
NA
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Multi tenancy Virtual Cluster

MESHERY485b

RELATED PATTERNS
Minecraft App
MESHERY48dd
MULTI TENANCY VIRTUAL CLUSTER
Description
Virtual clusters are fully functional Kubernetes clusters nested inside a physical host cluster providing better isolation and flexibility to support multi-tenancy. Multiple teams can operate independently within the same physical infrastructure while minimizing conflicts, maximizing autonomy, and reducing costs. Virtual clusters run inside host cluster namespaces but function as separate Kubernetes clusters, with their own API server, control plane, syncer, and set of resources. While virtual clusters share the physical resources of the host cluster (such as CPU, memory, and storage), they manage their resources independently, allowing for efficient utilization and scaling. Virtual clusters interact with the host cluster for resource scheduling and networking but maintain a level of abstraction to ensure operations within a virtual cluster don't directly affect the host cluster's global state.
Caveats and Considerations
For caveats and considerations, check out the vcluster docs: https://www.vcluster.com/docs/vcluster
Technologies
Related Patterns
Minecraft App
MESHERY48dd
My first k8s app

MESHERY496d

RELATED PATTERNS
Minecraft App
MESHERY48dd
MY FIRST K8S APP
Description
This is a simple Kubernetes workflow application that has a deployment, pods, and a service. It is a first design used for exploring the Meshery Cloud platform.
Caveats and Considerations
No caveats; Free to reuse
Technologies
Related Patterns
Minecraft App
MESHERY48dd
MySQL Deployment

MESHERY492d

RELATED PATTERNS
Minecraft App
MESHERY48dd
MYSQL DEPLOYMENT
Description
This is a simple MySQL deployment that installs a Kubernetes Deployment, a volume, and a Service.
Caveats and Considerations
No caveats. Ensure the ports are exposed accurately.
Technologies
Related Patterns
Minecraft App
MESHERY48dd
MySQL installation with cinder volume plugin

MESHERY4693

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
MYSQL INSTALLATION WITH CINDER VOLUME PLUGIN
Description
Cinder is a Block Storage service for OpenStack. It can be used as an attachment mounted to a pod in Kubernetes.
Caveats and Considerations
Currently, the Cinder volume plugin is designed to work only on Linux hosts and offers ext4 and ext3 as the supported filesystem types. Make sure that the kubelet host machine has the required executables installed.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
NGINX deployment

MESHERY4981

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
NGINX DEPLOYMENT
Description
This design contains an NGINX deployment.
Caveats and Considerations
This design is for learning purposes and may be freely copied and distributed.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Network policy

MESHERY4da3

NETWORK POLICY
Description
If you want to control traffic flow at the IP address or port level for TCP, UDP, and SCTP protocols, then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster. NetworkPolicies are an application-centric construct which allow you to specify how a pod is allowed to communicate with various network "entities" (we use the word "entity" here to avoid overloading the more common terms such as "endpoints" and "services", which have specific Kubernetes connotations) over the network. NetworkPolicies apply to a connection with a pod on one or both ends, and are not relevant to other connections.
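A small sketch of such a policy (labels, ports, and the CIDR are placeholders; the caveat below applies):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-policy
spec:
  podSelector:
    matchLabels:
      app: web               # Pods this policy applies to
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend # only these Pods may connect in
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5432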
Caveats and Considerations
This is a sample network policy with ingress and egress rules defined; change it according to your requirements.
Technologies
Network(Service -> Endpoint)

MESHERY440f

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
NETWORK(SERVICE -> ENDPOINT)
Description
A relationship that defines network edges between components. In the design Edge network relationship defines a network configuration for managing services and endpoints in a Kubernetes environment. This design shows the relationship between two Kubernetes components Endpoint and Service.
Caveats and Considerations
NA
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Nginx Controller

MESHERY4bcf

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
NGINX CONTROLLER
Description
Nginx controller
Caveats and Considerations
Deploy with a namespace
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Node Problem Detector

MESHERY4a51

RELATED PATTERNS
Minecraft App
MESHERY48dd
NODE PROBLEM DETECTOR
Description
node-problem-detector aims to make various node problems visible to the upstream layers in the cluster management stack. It is a daemon that runs on each node, detects node problems, and reports them to the apiserver. node-problem-detector can either run as a DaemonSet (https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) or run standalone. It now runs as a Kubernetes Addon (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons) enabled by default in GKE clusters. It is also enabled by default in AKS as part of the AKS Linux Extension (https://learn.microsoft.com/en-us/azure/aks/faq#what-is-the-purpose-of-the-aks-linux-extension-i-see-installed-on-my-linux-vmss-instances). There are many node problems that can affect the pods running on a node, such as: infrastructure daemon issues (ntp service down); hardware issues (bad CPU, memory, or disk); kernel issues (kernel deadlock, corrupted file system); and container runtime issues (unresponsive runtime daemon). Currently, these problems are invisible to the upstream layers in the cluster management stack, so Kubernetes will continue scheduling pods to the bad nodes. To solve this problem, the new daemon node-problem-detector was introduced to collect node problems from various daemons and make them visible to the upstream layers. Once upstream layers have visibility into those problems, a remedy system can be discussed.
Caveats and Considerations
node-problem-detector uses Event and NodeCondition to report problems to apiserver. NodeCondition: Permanent problem that makes the node unavailable for pods should be reported as NodeCondition. Event: Temporary problem that has limited impact on pod but is informative should be reported as Event. For more Caveats And Considerations checkout this https://github.com/kubernetes/node-problem-detector
Technologies
Related Patterns
Minecraft App
MESHERY48dd
Nodejs-kubernetes-microservices

MESHERY496f

RELATED PATTERNS
Minecraft App
MESHERY48dd
NODEJS-KUBERNETES-MICROSERVICES
Description
NA
Caveats and Considerations
NA
Technologies
Related Patterns
Minecraft App
MESHERY48dd
Online Boutique

MESHERY498b

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
ONLINE BOUTIQUE
Description
Google's Microservices sample app is named Online Boutique. Docs - https://docs.meshery.io/guides/sample-apps#online-boutique Source - https://github.com/GoogleCloudPlatform/microservices-demo
Caveats and Considerations
N/A
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Persistence-volume-claim

MESHERY4671

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
PERSISTENCE-VOLUME-CLAIM
Description
Defines a Kubernetes PersistentVolumeClaim (PVC) requesting 10Gi storage with 'manual' storage class. Supports both ReadWriteMany and ReadWriteOnce access modes, with optional label-based PV selection. Carefully adjust storage size for specific storage solutions, and consider annotations, security, monitoring, and scalability needs.
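A sketch matching that description (the selector labels are placeholders; pick one access mode per claim unless your PV genuinely supports both):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany        # or ReadWriteOnce, as the description notes
  resources:
    requests:
      storage: 10Gi
  selector:                # optional label-based PV selection
    matchLabels:
      type: local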
Caveats and Considerations
Ensure that the chosen storageClassName is properly configured and available in your cluster. Be cautious about the ReadWriteMany and ReadWriteOnce access modes, as they impact compatibility with PersistentVolumes (PVs). The selector should match existing PVs in your cluster if used. Adjust the storage size to align with your storage solution, keeping in mind the AWS EFS special case. Review the need for annotations, confirm the namespace, and implement security measures. Monitor and set up alerts for your PVC, and plan for backup and disaster recovery. Lastly, ensure scalability to meet your application's storage requirements.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Persistent Volume

MESHERY4f33

RELATED PATTERNS
Minecraft App
MESHERY48dd
PERSISTENT VOLUME
Description
The "Persistent Volume" design enables Kubernetes clusters to manage stateful applications by providing a reliable and consistent storage solution. This design ensures that data remains intact and accessible even if pods are rescheduled or fail.
Caveats and Considerations
1. Ensure that the selected storage class supports your desired storage backend and meets the performance and availability requirements of your applications. 2. Misconfigurations can lead to suboptimal performance or even data loss.
Technologies
Related Patterns
Minecraft App
MESHERY48dd
Pod Life Cycle

MESHERY437a

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
POD LIFE CYCLE
Description
This design emphasizes Kubernetes' ability to manage Pod life cycles autonomously, ensuring efficient resource utilization and application availability. It addresses considerations such as Pod initialization, readiness, liveness, scaling, and graceful termination, providing a comprehensive framework for deploying and managing containerized applications on Kubernetes clusters.
Caveats and Considerations
Developers and operators need to carefully configure readiness and liveness probes to accurately reflect application health. Improper configuration may lead to unnecessary restarts or erroneous scaling decisions, impacting application stability and performance. Additionally, managing Pod life cycles across large-scale deployments requires efficient monitoring and logging frameworks to diagnose and resolve issues promptly.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Pod Liveness

MESHERY4a7e

RELATED PATTERNS
Pod Readiness
MESHERY4b83
POD LIVENESS
Description
This design ensures continuous availability and health of Kubernetes Pods. It defines liveness probes that periodically check the state of application instances within Pods. These probes determine if Pods are responsive and functioning correctly based on configured criteria, such as HTTP requests or custom executable scripts. By automatically restarting Pods that fail these checks, this design enhances application reliability and uptime, ensuring seamless operation and minimal disruption to services in Kubernetes environments.
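A typical HTTP liveness probe along the lines described (path, port, and timings are assumptions):

containers:
  - name: app
    image: nginx
    livenessProbe:
      httpGet:
        path: /healthz          # your application's health endpoint
        port: 8080
      initialDelaySeconds: 5    # grace period before the first check
      periodSeconds: 10         # check interval
      failureThreshold: 3       # restart after three consecutive failures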
Caveats and Considerations
No caveats
Technologies
Related Patterns
Pod Readiness
MESHERY4b83
Pod Multi Containers

MESHERY436c

RELATED PATTERNS
Untitled Design
MESHERY4135
POD MULTI CONTAINERS
Description
"Pod Multi Containers" design facilitates the deployment of Kubernetes Pods that consist of multiple containers, each serving a distinct role within a single cohesive unit.
Caveats and Considerations
No caveats
Technologies
Related Patterns
Untitled Design
MESHERY4135
Pod Node Affinity

MESHERY4134

RELATED PATTERNS
Pod Resource Request
MESHERY4a23
POD NODE AFFINITY
Description
By defining node affinity rules, this design ensures that Pods are deployed on nodes with specific labels, such as hardware capabilities or geographical location, aligning with application requirements and operational policies. This capability enhances workload performance, optimizes resource utilization, and supports efficient workload distribution across Kubernetes clusters, enhancing scalability and fault tolerance in distributed computing environments.
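A sketch of such rules, following the standard Kubernetes node-affinity shape (label keys and values are placeholders):

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/zone   # geographic placement
                operator: In
                values:
                  - us-east-1a
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: disktype                      # hardware capability label
                operator: In
                values:
                  - ssd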
Caveats and Considerations
While node affinity provides powerful capabilities for workload optimization, improper configuration or overly restrictive rules can lead to uneven distribution of Pods across nodes, potentially underutilizing cluster resources or causing nodes to become overwhelmed. Careful consideration of node labels, resource requirements, and workload characteristics is necessary to achieve balanced resource allocation and maximize cluster efficiency. Additionally, changes in node labels or availability might impact Pod scheduling, necessitating regular review and adjustment of node affinity rules to maintain optimal deployment strategies.
Technologies
Related Patterns
Pod Resource Request
MESHERY4a23
Pod Priviledged Simple

MESHERY4568

RELATED PATTERNS
Untitled Design
MESHERY4135
POD PRIVILEDGED SIMPLE
Description
This design configuration involves running Kubernetes pods with privileged access, which grants them elevated permissions within their host environment. This setup is typically used when applications or services require access to privileged resources or functionalities that are not available in standard pod configurations.
Caveats and Considerations
Careful consideration and adherence to best practices are crucial to maintain the balance between operational flexibility and security when implementing the "Pod Privileged Simple" design in Kubernetes environments.
Technologies
Related Patterns
Untitled Design
MESHERY4135
Pod Readiness

MESHERY4b83

RELATED PATTERNS
Robot Shop Sample App
MESHERY4c4e
POD READINESS
Description
Pod readiness in Kubernetes indicates when a Pod is prepared to handle requests and execute its intended tasks. It hinges on the successful initialization of its containers and the positive response from readiness probes, which verify the health and operational readiness of the Pod's components. This readiness status is crucial for ensuring that services can safely direct traffic to the Pod without encountering errors or delays caused by incomplete initialization or unavailability. Managing Pod readiness effectively enhances application reliability and performance by enabling Kubernetes to efficiently distribute Pods across nodes while ensuring they are capable of fulfilling their roles. Regular monitoring and adjustment of readiness probes and configurations are essential for maintaining optimal application responsiveness and resilience in dynamic Kubernetes environments.
Caveats and Considerations
No caveats
Technologies
Related Patterns
Robot Shop Sample App
MESHERY4c4e
Pod Resource Limit

MESHERY41a3

RELATED PATTERNS
Pod Resource Request
MESHERY4a23
POD RESOURCE LIMIT
Description
This design ensures efficient resource utilization, prevents resource contention, and enhances overall stability and reliability of applications running in Kubernetes.
Caveats and Considerations
Setting appropriate resource limits and requests requires careful consideration of application requirements, monitoring for potential resource bottlenecks, and periodic adjustments to optimize performance and scalability as workload demands evolve.
Technologies
Related Patterns
Pod Resource Request
MESHERY4a23
Pod Resource Memory Request Limit

MESHERY45d9

RELATED PATTERNS
Untitled Design
MESHERY4135
POD RESOURCE MEMORY REQUEST LIMIT
Description
This design ensures efficient resource management and optimization in Kubernetes clusters by defining how much memory each pod requests and the maximum it can consume. Memory requests define the amount of memory Kubernetes guarantees to allocate to a pod when scheduling it onto a node, ensuring that sufficient resources are available for the pod to operate without contention.
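In manifest form, that is the familiar resources block (the values are illustrative):

containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: 128Mi     # guaranteed to the Pod at scheduling time
      limits:
        memory: 256Mi     # the container is OOM-killed if it exceeds this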
Caveats and Considerations
No caveats
Technologies
Related Patterns
Untitled Design
MESHERY4135
Pod Resource Memory Request Limit

MESHERY44ee

RELATED PATTERNS
Pod Resource Request
MESHERY4a23
POD RESOURCE MEMORY REQUEST LIMIT
Description
Define a limit on the amount of resources that a K8s pod can use.
Caveats and Considerations
None
Technologies
Related Patterns
Pod Resource Request
MESHERY4a23
Pod Resource Request

MESHERY4a23

RELATED PATTERNS
GCP DataMesh
MESHERY4e13
POD RESOURCE REQUEST
Description
This design focuses on specifying the minimum CPU and memory requirements for Kubernetes Pods. By setting resource requests, this design ensures optimal resource allocation within Kubernetes clusters, thereby enhancing workload performance and maintaining stability across various applications and services. This feature is essential for fine-tuning resource utilization, preventing resource contention, and supporting efficient scaling and management of containerized workloads in cloud-native environments.
Caveats and Considerations
No caveats
Technologies
Related Patterns
GCP DataMesh
MESHERY4e13
Pod Service Account Token

MESHERY4756
POD SERVICE ACCOUNT TOKEN
Description
This design concerns Kubernetes Service Account tokens used by Pods. It emphasizes the importance of limiting token permissions to minimize the risk of unauthorized access to Kubernetes API resources. This design advocates for regular rotation of Service Account tokens to mitigate potential security vulnerabilities, ensuring that compromised tokens have a limited lifespan.
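Token rotation of this kind is commonly expressed with a projected serviceAccountToken volume, sketched below (the path, lifetime, and audience are assumptions):

spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: sa-token
          mountPath: /var/run/secrets/tokens
  volumes:
    - name: sa-token
      projected:
        sources:
          - serviceAccountToken:
              path: token
              expirationSeconds: 3600   # short-lived; the kubelet rotates it
              audience: my-api          # assumed audience restriction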
Caveats and Considerations
Administrators must carefully manage Service Account token lifecycles to avoid disruptions in Pod functionality caused by expired tokens. Additionally, strict adherence to least privilege principles is essential when assigning permissions to Service Accounts, as overly permissive tokens can increase the attack surface and compromise cluster security.
Technologies
Pod Volume Mount SubPath

MESHERY4e52

RELATED PATTERNS
Minecraft App
MESHERY48dd
POD VOLUME MOUNT SUBPATH
Description
This design demonstrates the usage of Kubernetes' subPathExpr feature to mount a specific sub-path of a volume into a container within a pod. This approach allows for more dynamic and flexible volume mounts, enabling containers to access different parts of a volume based on environment variables or pod metadata. By utilizing subPathExpr, Kubernetes administrators and developers can configure pods to mount unique directories tailored to the specific needs of each container, without needing to create multiple volume definitions. This design is particularly useful in scenarios where you need to differentiate storage paths for various instances of an application or manage data separation within shared volumes.
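A sketch of the feature (the volume type and paths are placeholders):

spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name   # pod metadata drives the mount path
      volumeMounts:
        - name: workdir
          mountPath: /logs
          subPathExpr: $(POD_NAME)       # each Pod mounts its own sub-directory
  volumes:
    - name: workdir
      hostPath:
        path: /var/log/pods-demo         # placeholder backing volume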
Caveats and Considerations
No caveats
Technologies
Related Patterns
Minecraft App
MESHERY48dd
Pod Volume Mount SubPath-expr

MESHERY4fde

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
POD VOLUME MOUNT SUBPATH-EXPR
Description
This design demonstrates the usage of Kubernetes' subPathExpr feature to mount a specific sub-path of a volume into a container within a pod. This approach allows for more dynamic and flexible volume mounts, enabling containers to access different parts of a volume based on environment variables or pod metadata. By utilizing subPathExpr, Kubernetes administrators and developers can configure pods to mount unique directories tailored to the specific needs of each container, without needing to create multiple volume definitions. This design is particularly useful in scenarios where you need to differentiate storage paths for various instances of an application or manage data separation within shared volumes.
Caveats and Considerations
1. Not all volume types support subPathExpr. Ensure that the volume plugin you are using is compatible with this feature.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Pod Volumes Projected

MESHERY4d18

RELATED PATTERNS
Minecraft App
MESHERY48dd
POD VOLUMES PROJECTED
Description
This design involves the configuration of projected volumes within Kubernetes pods, allowing them to access multiple sources of data simultaneously. It enhances the flexibility and functionality of pods by aggregating information from various Kubernetes and non-Kubernetes sources into a unified view accessible within the pod's filesystem.
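A sketch of a projected volume aggregating several sources (the Secret and ConfigMap names are assumptions):

volumes:
  - name: all-in-one
    projected:
      sources:
        - secret:
            name: mysecret           # assumed Secret
        - configMap:
            name: myconfigmap        # assumed ConfigMap
        - downwardAPI:
            items:
              - path: labels
                fieldRef:
                  fieldPath: metadata.labels   # pod metadata exposed as a file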
Caveats and Considerations
Projected volumes may include sensitive information such as secrets and service account tokens. Care must be taken to ensure that only authorized pods have access to these volumes and that access is tightly controlled to prevent unauthorized access.
Technologies
Related Patterns
Minecraft App
MESHERY48dd
Pods Image Pull Policy

MESHERY4c85
PODS IMAGE PULL POLICY
Description
Configuration and management of image pull policies for Kubernetes pods. The image pull policy determines how and when the container images are pulled from the container registry, impacting both the efficiency and reliability of application deployments. Kubernetes provides three image pull policies: Always, IfNotPresent, and Never. 1. Always: The image is always pulled from the registry, ensuring the latest version is used but potentially increasing deployment times and registry load. 2. IfNotPresent: The image is pulled only if it is not already present on the node, optimizing for faster deployments when the image hasn't changed. 3. Never: The image is never pulled from the registry, assuming it is pre-installed on the node, which can be useful in air-gapped environments. This design helps Kubernetes administrators and developers choose the appropriate image pull policy based on their specific needs for development, testing, and production environments.
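Setting the policy is a one-line field on the container (the image tag is illustrative):

containers:
  - name: app
    image: nginx:1.25                 # pinning a tag pairs well with IfNotPresent
    imagePullPolicy: IfNotPresent     # Always | IfNotPresent | Never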
Caveats and Considerations
Using the Always policy can lead to increased network dependency and potential delays in deployments if the registry is slow or inaccessible.
Technologies
Prometheus Sample

MESHERY4bea

RELATED PATTERNS
Pod Readiness
MESHERY4b83
PROMETHEUS SAMPLE
Description
This is a simple Prometheus monitoring design.
Caveats and Considerations
Networking should be properly configured to enable communication between the frontend and backend components of the app.
Technologies
Related Patterns
Pod Readiness
MESHERY4b83
Prometheus adapter

MESHERY406f

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
PROMETHEUS ADAPTER
Description
This YAML configuration defines a Kubernetes Deployment for the prometheus-adapter, a component of the kube-prometheus stack within the monitoring namespace. The deployment manages two replicas of the prometheus-adapter pod to ensure high availability. Each pod runs a container using the prometheus-adapter image from the Kubernetes registry, configured with various command-line arguments to specify settings like the configuration file path, metrics re-list interval, and Prometheus URL.
Caveats and Considerations
1. Namespace: Ensure that the monitoring namespace exists before deploying this configuration. 2. ConfigMap: Verify that the adapter-config ConfigMap is created and contains the correct configuration data required by the prometheus-adapter. 3. TLS Configuration: The deployment includes TLS settings with specific cipher suites; ensure these align with your security policies and requirements. 4. Resource Allocation: The specified CPU and memory limits and requests should be reviewed to match the expected load and cluster capacity. 5. Service Account: Ensure that the prometheus-adapter service account has the necessary permissions to operate correctly within the cluster
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Prometheus dummy exporter

MESHERY487c

RELATED PATTERNS
Pod Readiness
MESHERY4b83
PROMETHEUS DUMMY EXPORTER
Description
A simple prometheus-dummy-exporter container exposes a single Prometheus metric with a constant value. The metric name, value, and the port on which it is served can be passed via flags. This container is then deployed in the same pod with another container, prometheus-to-sd, configured to use the same port. It scrapes the metric and publishes it to Stackdriver. This adapter isn't part of the sample code, but a standard component used by many Kubernetes applications. You can learn more about it at https://github.com/GoogleCloudPlatform/k8s-stackdriver/tree/master/prometheus-to-sd
Caveats and Considerations
It is only developed for Google Kubernetes Engine to collect metrics from system services in order to support Kubernetes users. We designed the tool to be lean when deployed as a sidecar in your pod. It's intended to support only the metrics the Kubernetes team at Google needs and is not meant for end-users.
Technologies
Related Patterns
Pod Readiness
MESHERY4b83
Prometheus-monitoring-ns

MESHERY420f

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
PROMETHEUS-MONITORING-NS
Description
This is a simple Prometheus monitoring design.
Caveats and Considerations
Networking should be properly configured to enable communication between the frontend and backend components of the app.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
QAT-TLS-handshake-acceleration-for-Istio.yaml

MESHERY4baf

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
QAT-TLS-HANDSHAKE-ACCELERATION-FOR-ISTIO.YAML
Description
Cryptographic operations are among the most compute-intensive and critical operations when it comes to secured connections. Istio uses Envoy as the “gateways/sidecar” to handle secure connections and intercept the traffic. Depending upon use cases, when an ingress gateway must handle a large number of incoming TLS and secured service-to-service connections through sidecar proxies, the load on Envoy increases. The potential performance depends on many factors, such as size of the cpuset on which Envoy is running, incoming traffic patterns, and key size. These factors can impact Envoy serving many new incoming TLS requests. To achieve performance improvements and accelerated handshakes, a new feature was introduced in Envoy 1.20 and Istio 1.14. It can be achieved with 3rd Gen Intel® Xeon® Scalable processors, the Intel® Integrated Performance Primitives (Intel® IPP) crypto library, CryptoMB Private Key Provider Method support in Envoy, and Private Key Provider configuration in Istio using ProxyConfig.
Caveats and Considerations
Ensure networking is set up properly and the correct annotations are applied to each resource for the custom Intel configuration.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
RBAC for ElasticSearch

MESHERY4af2

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
RBAC FOR ELASTICSEARCH
Description
This infrastructure design defines resources related to Role-Based Access Control (RBAC) for Elasticsearch in a Kubernetes environment. Here's a brief description of the components: 1. zk (ZooKeeper StatefulSet): a StatefulSet named zk with 3 replicas is defined to manage ZooKeeper instances. It uses the ordered pod management policy, ensuring that pods are started in order. ZooKeeper is configured with specific settings, including ports, data directories, and resource requests. It has affinity settings to avoid running multiple ZooKeeper instances on the same node, and the configuration includes liveness and readiness probes to ensure the health of the pods. 2. zk-cs (ZooKeeper Service): a Kubernetes Service named zk-cs is defined to provide access to the ZooKeeper instances. It exposes the client port (2181) used to connect to ZooKeeper. 3. zk-hs (ZooKeeper Headless Service): another Kubernetes Service named zk-hs is defined as headless (with cluster IP set to None). It exposes ports for the ZooKeeper server (2888) and leader election (3888); this headless service is typically used for direct communication with individual ZooKeeper instances. 4. zk-pdb (ZooKeeper PodDisruptionBudget): a PodDisruptionBudget named zk-pdb is defined to limit the maximum number of unavailable ZooKeeper pods to 1. This ensures that at least one ZooKeeper instance remains available during disruptions.
Caveats and Considerations
Networking should be properly configured to enable communication between pods and services. Ensure sufficient resources are available in the cluster.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Redis Leader Deployment

MESHERY48cc

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
REDIS LEADER DEPLOYMENT
Description
This is a simple deployment of the Redis leader app. Its deployment includes one replica that uses the image docker.io/redis:6.0.5, requests cpu: 100m and memory: 100Mi, and exposes containerPort: 6379.
Caveats and Considerations
None
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Redis PHP Guestbook

MESHERY4b52

RELATED PATTERNS
Minecraft App
MESHERY48dd
REDIS PHP GUESTBOOK
Description
This design shows you how to build and deploy a simple (not production ready) multi-tier web application using Kubernetes and Docker. This example consists of the following components: 1. A single-instance Redis to store guestbook entries. 2. Multiple web frontend instances. Creating the Redis leader Service: the guestbook application needs to communicate with Redis to write its data, so you apply a Service to proxy the traffic to the Redis Pod. A Service defines a policy to access the Pods. Setting up Redis followers: although the Redis leader is a single Pod, you can make it highly available and meet traffic demands by adding a few Redis followers, or replicas. Creating the Redis follower Service: the guestbook application needs to communicate with the Redis followers to read data, so to make the Redis followers discoverable you must set up another Service.
Caveats and Considerations
None
Technologies
Related Patterns
Minecraft App
MESHERY48dd
Redis master deployment

MESHERY4357

RELATED PATTERNS
Pod Resource Request
MESHERY4a23
REDIS MASTER DEPLOYMENT
Description
In this design, the Redis master node is configured for high availability and reliability, crucial for applications requiring fast data access and storage. It leverages Kubernetes' capabilities to manage Redis master pods, ensuring fault tolerance through replication and monitoring. This design typically involves setting up persistent storage for data durability, defining resource requests and limits to optimize performance, and configuring appropriate networking for seamless communication within the cluster.
Caveats and Considerations
Careful consideration is given to security practices, such as access controls and encryption, to safeguard sensitive data stored in Redis. Continuous monitoring and scaling strategies are implemented to maintain optimal performance and availability as workload demands fluctuate.
Technologies
Related Patterns
Pod Resource Request
MESHERY4a23
Redis_using_configmap

MESHERY447e

RELATED PATTERNS
Pod Resource Request
MESHERY4a23
REDIS_USING_CONFIGMAP
Description
The "Redis Using ConfigMap" design configures and deploys a Redis instance on Kubernetes using ConfigMaps to manage configuration settings. This design leverages Kubernetes ConfigMaps to store and inject Redis configuration files, allowing for dynamic and centralized management of configuration parameters without altering the Redis container image. By decoupling the configuration from the application, it facilitates easier updates and management of Redis settings, improving maintainability and operational efficiency. This approach is ideal for scenarios where configuration flexibility and quick adjustments are crucial, such as in development, testing, and production environments.
Caveats and Considerations
While ConfigMaps simplify configuration management, changes to the ConfigMap require a pod restart to take effect. Ensure that updates are carefully planned to avoid unintended downtime.
Technologies
Related Patterns
Pod Resource Request
MESHERY4a23
Relationship Master Design

MESHERY43e0

RELATED PATTERNS
Pod Readiness
MESHERY4b83
RELATIONSHIP MASTER DESIGN
Description
A design that shows relationships between various Kubernetes resources.
Caveats and Considerations
No Caveats
Technologies
Related Patterns
Pod Readiness
MESHERY4b83
Resilient Web App

MESHERY4e64

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
RESILIENT WEB APP
Description
This is a simple app that uses NGINX as a web proxy to improve the resiliency of a web app.
Caveats and Considerations
Networking should be properly configured to enable communication between the frontend and backend components of the app.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Robot Shop Sample App

MESHERY4c4e

RELATED PATTERNS
Pod Readiness
MESHERY4b83
ROBOT SHOP SAMPLE APP
Description
Stan's Robot Shop is a sample microservice application you can use as a sandbox to test and learn containerised application orchestration and monitoring techniques. It is not intended to be a comprehensive reference example of how to write a microservices application, although you will better understand some of those concepts by playing with Stan's Robot Shop. To be clear, the error handling is patchy and there is not any security built into the application.
Caveats and Considerations
This sample microservice application has been built using these technologies: NodeJS (Express), Java (Spring Boot), Python (Flask), Golang, PHP (Apache), MongoDB, Redis, MySQL (Maxmind data), RabbitMQ, Nginx, AngularJS (1.x)
Technologies
Related Patterns
Pod Readiness
MESHERY4b83
Run DaemonSet on GKE Autopilot

MESHERY4bf8

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
RUN DAEMONSET ON GKE AUTOPILOT
Description
GKE uses the total size of your deployed workloads to determine the size of the nodes that Autopilot provisions for the cluster. If you add or resize a DaemonSet after Autopilot provisions a node, GKE won't resize existing nodes to accommodate the new total workload size. DaemonSets with resource requests larger than the allocatable capacity of existing nodes, after accounting for system pods, also won't get scheduled on those nodes. Starting in GKE version 1.27.6-gke.1248000, clusters in Autopilot mode detect nodes that can't fit all DaemonSets and, over time, migrate workloads to larger nodes that can fit all DaemonSets. This process takes some time, especially if the nodes run system Pods, which need extra time to gracefully terminate so that there's no disruption to core cluster capabilities. In GKE version 1.27.5-gke.200 or earlier, we recommend cordoning and draining nodes that can't accommodate DaemonSet Pods.
Caveats and Considerations
For all GKE versions, we recommend the following best practices when deploying DaemonSets on Autopilot: Deploy DaemonSets before any other workloads. Set a higher PriorityClass on DaemonSets than regular Pods. The higher PriorityClass lets GKE evict lower-priority Pods to accommodate DaemonSet pods if the node can accommodate those pods. This helps to ensure that the DaemonSet is present on each node without triggering node recreation.
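A sketch of the recommended PriorityClass (the value is an assumption; it only needs to exceed that of your regular Pods):

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: daemonset-high
value: 1000000                    # assumed; higher than regular workloads
preemptionPolicy: PreemptLowerPriority
description: Lets lower-priority Pods be evicted to fit DaemonSet Pods
---
# Referenced from the DaemonSet Pod template:
#   spec:
#     template:
#       spec:
#         priorityClassName: daemonset-high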
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Running ZooKeeper, A Distributed System Coordinator

MESHERY4339

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
RUNNING ZOOKEEPER, A DISTRIBUTED SYSTEM COORDINATOR
Description
This cloud native design defines a Kubernetes configuration for a ZooKeeper deployment. It includes a Service, PodDisruptionBudget, and StatefulSet. It defines a Service named zk-hs with labels indicating it is part of the zk application. It exposes two ports, 2888 and 3888, and has a clusterIP of None meaning it is only accessible within the cluster. The Service selects Pods with the zk label. The next part defines another Service named zk-cs with similar labels and a single port, 2181, used for client connections. It also selects Pods with the zk label. Following that, a PodDisruptionBudget named zk-pdb is defined. It sets the selector to match Pods with the zk label and allows a maximum of 1 Pod to be unavailable during disruptions. Finally, a StatefulSet named zk is defined. It selects Pods with the zk label and uses the zk-hs Service for the headless service. It specifies 3 replicas, a RollingUpdate update strategy, and OrderedReady pod management policy. The Pod template includes affinity rules for pod anti-affinity, resource requests for CPU and memory, container ports for ZooKeeper, a command to start ZooKeeper with specific configurations, and readiness and liveness probes. It also defines a volume claim template for data storage
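For instance, the zk-pdb object described above reduces to a short manifest (shown with the current policy/v1 API; the design may pin an older version):

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  maxUnavailable: 1        # at most one ZooKeeper Pod down during disruptions
  selector:
    matchLabels:
      app: zk              # matches Pods carrying the zk label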
Caveats and Considerations
You must have a cluster with at least four nodes, and each node requires at least 2 CPUs and 4 GiB of memory.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
RuntimeClass

MESHERY4c6c

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
RUNTIMECLASS
Description
This pattern establishes and visualizes the relationship between RuntimeClass (a Kubernetes component) and other Kubernetes components.
Caveats and Considerations
The name of the Runtime Class is referenced by the other Kubernetes Components
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Sample Template - Relationship Diagram

MESHERY4eae

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
SAMPLE TEMPLATE - RELATIONSHIP DIAGRAM
Description
A beginner's guide to creating a Relationship Diagram in Kanvas with a Model Template.
Caveats and Considerations
Ensure clarity, avoid overcrowding, and prioritize relationships over excessive detail.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Serve an LLM using multi-host TPUs on GKE

MESHERY4813
SERVE AN LLM USING MULTI-HOST TPUS ON GKE
Description
The "Serve an LLM using multi-host TPUs on GKE" design in Meshmap details the configuration and deployment of a Language Model (LLM) service on Google Kubernetes Engine (GKE) utilizing multi-host Tensor Processing Units (TPUs). This design leverages the high-performance computing capabilities of TPUs to enhance the inference speed and efficiency of the language model. Key aspects of this design include setting up Kubernetes pods with TPU node affinity to ensure the LLM workloads are scheduled on nodes equipped with TPUs. Configuration includes defining resource limits and requests to optimize TPU utilization and ensure stable performance under varying workloads. Integration with Google Cloud's TPU provisioning and monitoring tools enables automated scaling and efficient management of TPUs based on demand. Security measures, such as role-based access controls and encryption, are implemented to safeguard data processed by the LLM.
Caveats and Considerations
TPUs may not always be available in sufficient quantities or sizes based on demand. This can lead to scalability challenges or delays in provisioning resources for LLM inference tasks.
Technologies
Serve an LLM with multiple GPUs in GKE

MESHERY4d06

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
SERVE AN LLM WITH MULTIPLE GPUS IN GKE
Description
Serve a large language model (LLM) with GPUs in Google Kubernetes Engine (GKE). Create a GKE Standard cluster that uses multiple L4 GPUs and prepares the GKE infrastructure to serve either of the following models: 1. Falcon 40b. 2. Llama 2 70b.
Caveats and Considerations
Depending on the data format of the model, the number of GPUs varies. In this design, each model uses two L4 GPUs.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Service Internal Traffic Policy

MESHERY41b6

SERVICE INTERNAL TRAFFIC POLICY
Description
Service Internal Traffic Policy enables internal traffic restrictions to only route internal traffic to endpoints within the node the traffic originated from. The "internal" traffic here refers to traffic originated from Pods in the current cluster. This can help to reduce costs and improve performance. How does it work? The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it is set to Local, only node-local endpoints are considered. When it is Cluster (the default) or is not set, Kubernetes considers all endpoints.
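The setting is a single field on the Service, sketched here (the name, selector, and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  internalTrafficPolicy: Local   # route in-cluster traffic to node-local endpoints only
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080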
Caveats and Considerations
Note: For pods on nodes with no endpoints for a given Service, the Service behaves as if it has zero endpoints (for Pods on this node) even if the service does have endpoints on other nodes.
Technologies
Serving T5 Large Language Model with TorchServe

MESHERY40e7

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
SERVING T5 LARGE LANGUAGE MODEL WITH TORCHSERVE
Description
Deploys a TorchServe inference server with a prepared T5 model and a client application. The manifests were tested against a GKE Autopilot Kubernetes cluster.
Caveats and Considerations
To configure HPA based on metrics from TorchServe you need to: enable Google Managed Prometheus or install OSS Prometheus, install the Custom Metrics Adapter, and apply pod-monitoring.yaml and hpa.yaml.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Simple Kubernetes Pod

MESHERY4150

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
SIMPLE KUBERNETES POD
Description
test
Caveats and Considerations
test
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Simple Kubernetes Pod

MESHERY4637

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
SIMPLE KUBERNETES POD
Description
test
Caveats and Considerations
test
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Simple Kubernetes Pod

MESHERY4e04

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
SIMPLE KUBERNETES POD
Description
This cloud-native design consists of a Kubernetes Pod running an Nginx container and a Kubernetes Service named service. The Pod uses the image nginx with an image pull policy of Always. The Service defines two ports: one with port 80 and target port 8080, and another with port 80. The Service allows communication between the Pod and external clients on port 80.
Caveats and Considerations
Networking should be properly configured to enable communication between pods and services. Ensure sufficient resources are available in the cluster.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Simple Kubernetes Pod

MESHERY4fc5

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
SIMPLE KUBERNETES POD
Description
This cloud-native design consists of a Kubernetes Pod running an Nginx container and a Kubernetes Service named service. The Pod uses the image nginx with an image pull policy of Always. The Service defines two ports: one with port 80 and target port 8080, and another with port 80. The Service allows communication between the Pod and external clients on port 80.
Caveats and Considerations
Networking should be properly configured to enable communication between pods and services. Ensure sufficient resources are available in the cluster.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Simple Kubernetes Pod

MESHERY454a

RELATED PATTERNS
Minecraft App
MESHERY48dd
SIMPLE KUBERNETES POD
Description
Just an example of how to use a Kubernetes Pod.
Caveats and Considerations
None
Technologies
Related Patterns
Minecraft App
MESHERY48dd
Simple MySQL Pod

MESHERY472e

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
SIMPLE MYSQL POD
Description
Testing patterns
Caveats and Considerations
None
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
TagSet Relationships

MESHERY43cd

RELATED PATTERNS
Minecraft App
MESHERY48dd
TAGSET RELATIONSHIPS
Description
This design is an example of a tagset relationship between two otherwise disparate components.
Caveats and Considerations
This design is for learning purposes. A TagSet Relationship in Meshery refers to the configuration where components are linked based on shared labels. This relationship is essential for identifying and managing groups of Pods, Services, or other resources that share common characteristics. By using matching labels, Kubernetes can efficiently route traffic, apply policies, and manage workloads across components that are part of the same logical grouping.
Technologies
Related Patterns
Minecraft App
MESHERY48dd
Thanos Query Design

MESHERY4034

RELATED PATTERNS
Pod Readiness
MESHERY4b83
THANOS QUERY DESIGN
Description
This is a sample app for testing Kubernetes deployments and Thanos.
Caveats and Considerations
Ensure networking is set up properly and the correct annotations are applied to each resource.
Technologies
Related Patterns
Pod Readiness
MESHERY4b83
Understanding the difference between Edge Relationships

MESHERY4f72

RELATED PATTERNS
Service Internal Traffic Policy
MESHERY41b6
UNDERSTANDING THE DIFFERENCE BETWEEN EDGE RELATIONSHIPS
Description
This particular design is for training purposes of understanding how relationships work in Meshery.
Caveats and Considerations
Learn the difference between semantically-meaningful relationships and those that are not.
Technologies
Related Patterns
Service Internal Traffic Policy
MESHERY41b6
Untitled Design

MESHERY4135

RELATED PATTERNS
Pod Multi Containers
MESHERY436c
UNTITLED DESIGN
Description
This design involves setting up NGINX using an init container to handle initialization tasks before the main NGINX container starts. The init container is responsible for configuration setup, such as generating or fetching configuration files. The Virtual Host (VHost) configuration allows NGINX to host multiple domains on a single server, each with its own configuration. This setup ensures a clean separation of initialization logic and main server functionality, enhancing modularity and maintainability.
Caveats and Considerations
1. Init Container Overhead: Using an init container adds a slight delay to the startup process, as it must complete its tasks before the main NGINX container can start.
Technologies
Related Patterns
Pod Multi Containers
MESHERY436c
Untitled Design

MESHERY4186

RELATED PATTERNS
Pod Readiness
MESHERY4b83
UNTITLED DESIGN
Description
Distributed tracing observability platforms, such as Jaeger, are essential for modern software applications that are architected as microservices. Jaeger maps the flow of requests and data as they traverse a distributed system. These requests may make calls to multiple services, which may introduce their own delays or errors. Jaeger connects the dots between these disparate components, helping to identify performance bottlenecks, troubleshoot errors, and improve overall application reliability.
Caveats and Considerations
Technologies used in this design: Jaeger for distributed tracing, along with sample services and deployments to demonstrate distributed tracing in Kubernetes.
Technologies
Related Patterns
Pod Readiness
MESHERY4b83
Untitled Design

MESHERY4e3f

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
UNTITLED DESIGN
Description
test
Caveats and Considerations
test
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Vault operator

MESHERY4c0a

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
VAULT OPERATOR
Description
This YAML configuration defines a Kubernetes Deployment for the vault-operator using the apps/v1 API version. It specifies that a single replica of the vault-operator pod should be maintained by Kubernetes. The deployment's metadata sets the name of the deployment to vault-operator. The pod template within the deployment includes metadata labels that tag the pod with name: vault-operator, which helps in identifying and managing the pod. The pod specification details a single container named vault-operator that uses the image quay.io/coreos/vault-operator:latest. This container is configured with two environment variables: MY_POD_NAMESPACE and MY_POD_NAME, which derive their values from the pod's namespace and name respectively using the Kubernetes downward API. This setup ensures that the vault-operator container is aware of its deployment context within the Kubernetes cluster.
Caveats and Considerations
1. Single Replica: The deployment is configured with a single replica. This might be a single point of failure. Consider increasing the number of replicas for high availability and fault tolerance. 2. Image Tagging: The container image is specified as latest, which can lead to unpredictable deployments because latest may change over time. It's recommended to use a specific version tag to ensure consistency and repeatability in deployments. 3. Environment Variables: The deployment uses environment variables (MY_POD_NAMESPACE and MY_POD_NAME) obtained from the downward API. Ensure these variables are correctly referenced and required by your application. 4. Resource Requests and Limits: The deployment does not specify resource requests and limits for CPU and memory. This could lead to resource contention or overcommitment issues. It’s good practice to define these to ensure predictable performance and resource usage.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
WordPress and MySQL with Persistent Volume on Kubernetes

MESHERY4d8b

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
WORDPRESS AND MYSQL WITH PERSISTENT VOLUME ON KUBERNETES
Description
This design includes a WordPress site and a MySQL database using Minikube. Both applications use PersistentVolumes and PersistentVolumeClaims to store data.
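As a sketch of the storage pattern this design relies on, a PersistentVolumeClaim for the MySQL data might look like the following; the claim name, label, and size are illustrative assumptions, not values taken from the design:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim     # illustrative name
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi        # illustrative size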
Caveats and Considerations
Warning: This deployment is not suitable for production use cases, as it uses single instance WordPress and MySQL Pods. Consider using WordPress Helm Chart to deploy WordPress in production.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Wordpress Deployment

MESHERY4c81

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
WORDPRESS DEPLOYMENT
Description
This is a sample WordPress deployment.
Caveats and Considerations
No caveats. Feel free to reuse or distribute.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Wordpress and MySql on Kubernetes

MESHERY4c7e

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
WORDPRESS AND MYSQL ON KUBERNETES
Description
This MeshMap design deploys a scalable and robust WordPress application, backed by a MySQL database, on a Kubernetes cluster. The design leverages Kubernetes resources to ensure high availability, efficient scaling, and ease of management for the WordPress site.
Caveats and Considerations
1. Ensure that your Kubernetes cluster has sufficient resources (CPU, memory, and storage) to handle the demands of both the WordPress and MySQL pods. 2. Properly set resource requests and limits to avoid resource contention, which could affect performance.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
ZooKeeper Cluster

MESHERY4f53

RELATED PATTERNS
Minecraft App
MESHERY48dd
ZOOKEEPER CLUSTER
Description
This StatefulSet will create three Pods, each running a ZooKeeper server container. The Pods will be named my-zookeeper-cluster-0, my-zookeeper-cluster-1, and my-zookeeper-cluster-2. The volumeMounts section of the spec tells the Pods to mount the PersistentVolumeClaim my-zookeeper-cluster-pvc to the /zookeeper/data directory. This will ensure that the ZooKeeper data is persistent and stored across restarts.
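A sketch of the shape the description implies; the container image and label names are assumptions, while the StatefulSet name, replica count, claim name, and mount path come from the text above:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-zookeeper-cluster
spec:
  serviceName: my-zookeeper-cluster   # headless Service, required by StatefulSets
  replicas: 3                         # pods my-zookeeper-cluster-0..2
  selector:
    matchLabels:
      app: my-zookeeper-cluster
  template:
    metadata:
      labels:
        app: my-zookeeper-cluster
    spec:
      containers:
        - name: zookeeper
          image: zookeeper:3.8        # assumed image
          volumeMounts:
            - name: data
              mountPath: /zookeeper/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-zookeeper-cluster-pvc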
Caveats and Considerations
1. The storage for a given Pod must either be provisioned by a PersistentVolume Provisioner based on the requested storage class, or pre-provisioned by an admin. 2. Deleting and/or scaling a StatefulSet down will not delete the volumes associated with the StatefulSet. This is done to ensure data safety, which is generally more valuable than an automatic purge of all related StatefulSet resources. 3. StatefulSets currently require a Headless Service to be responsible for the network identity of the Pods. You are responsible for creating this Service. 4. StatefulSets do not provide any guarantees on the termination of pods when a StatefulSet is deleted. To achieve ordered and graceful termination of the pods in the StatefulSet, it is possible to scale the StatefulSet down to 0 prior to deletion. 5. When using Rolling Updates with the default Pod Management Policy (OrderedReady), it's possible to get into a broken state that requires manual intervention to repair.
Technologies
Related Patterns
Minecraft App
MESHERY48dd
[Tutorial] Simple MySQL Pod

MESHERY4920

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
[TUTORIAL] SIMPLE MYSQL POD
Description
This design is used as a starting point for the 'Kubernetes ConfigMaps and Secrets with Meshery' tutorial.
Caveats and Considerations
This is a simple pod that is not managed through a deployment. It does not use persistent storage, a Service, or any other production properties. It should be used for tutorial, demonstration, or experimental purposes only.
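A hedged sketch of such a bare pod, assuming a hypothetical Secret for the root password in the spirit of the ConfigMaps and Secrets tutorial:

apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
    - name: mysql
      image: mysql:8.0
      env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret    # hypothetical Secret
              key: root-password    # hypothetical key
      ports:
        - containerPort: 3306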
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
api-backend

MESHERY4e4a

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
API-BACKEND
Description
API deployment using Kubernetes and its components.
Caveats and Considerations
No caveats.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
aws-k8s-cni.yaml

MESHERY49cc

RELATED PATTERNS
Minecraft App
MESHERY48dd
AWS-K8S-CNI.YAML
Description
AWS CNI networking integration with Kubernetes
Caveats and Considerations
No caveats
Technologies
Related Patterns
Minecraft App
MESHERY48dd
bookInfoPatternIstio.yaml

MESHERY4377

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
BOOKINFOPATTERNISTIO.YAML
Description
A deployment of the Bookinfo application on Kubernetes. This design uses Kubernetes components such as Deployments and Services to deploy the application.
Caveats and Considerations
Make sure you are running a recent version of Kubernetes.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
default-ns

MESHERY490b

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
DEFAULT-NS
Description
This is a sample default namespace that can be used for testing.
Caveats and Considerations
No caveats. Feel free to reuse.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
doks-nginx-deployment

MESHERY4bf7

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
DOKS-NGINX-DEPLOYMENT
Description
This is a sample design used for exploring Kubernetes Deployments and Services.
Caveats and Considerations
No caveats. Feel free to reuse and distribute.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
ebpf-exporter

MESHERY458f

RELATED PATTERNS
Pod Readiness
MESHERY4b83
EBPF-EXPORTER
Description
Export your eBPF metrics in Prometheus format using ebpf_exporter. eBPF is an enhancement to BPF (Berkeley Packet Filter) that allows custom analysis programs to be executed within the Linux kernel's tracing subsystems.
Caveats and Considerations
Prerequisites: a Kubernetes host running a Linux kernel with kernel sources available. Learn more about ebpf_exporter in its Git repo: https://github.com/cloudflare/ebpf_exporter
Technologies
Related Patterns
Pod Readiness
MESHERY4b83
elastic-stack

MESHERY486d

RELATED PATTERNS
Pod Readiness
MESHERY4b83
ELASTIC-STACK
Description
This YAML file deploys an Elasticsearch cluster (version 8.9.0) in Kubernetes, configured with roles including master, data, ingest, ML, and remote cluster client. It sets up a pod disruption budget to maintain high availability and specifies resource requests and limits for CPU, memory, and storage. Persistent storage is configured with a 2Gi volume using the standard-rwo storage class.
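The pod disruption budget mentioned above typically takes a shape like the following sketch; the selector label and minAvailable threshold are assumptions, since the description does not give them:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: elasticsearch-pdb
spec:
  minAvailable: 1            # assumed threshold
  selector:
    matchLabels:
      app: elasticsearch     # assumed label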
Caveats and Considerations
Version Compatibility: Ensure Elasticsearch 8.9.0 matches your Kubernetes version and other components. Resource Management: Adjust CPU, memory, and storage settings based on your specific needs. Storage Class: Confirm that the standard-rwo storage class is appropriate for your environment. Pod Disruption: Verify the pod disruption budget settings align with your high-availability requirements. Node Selector: Modify the node selector if not using Google Cloud Platform. Testing: Test the configuration in a staging environment before production deployment.
Technologies
Related Patterns
Pod Readiness
MESHERY4b83
enhance-flask-app

MESHERY48cd

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
ENHANCE-FLASK-APP
Description
Simple Flask app utilizing cloud technology.
Caveats and Considerations
The pods show a Failed status in the Kubernetes dashboard; this remains an open issue.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
fluentd deployment

MESHERY4f28

RELATED PATTERNS
Pod Readiness
MESHERY4b83
FLUENTD DEPLOYMENT
Description
This configuration sets up Fluentd-ES to collect and forward logs from Kubernetes pods to Elasticsearch for storage and analysis. Ensure that Elasticsearch is properly configured and accessible by Fluentd-ES for successful log aggregation and visualization. Additionally, adjust resource requests and limits according to your cluster's capacity and requirements.
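Collectors of this kind are commonly run as a DaemonSet pointed at Elasticsearch through environment variables. A sketch under those assumptions (the image tag, variable names, and resource figures are illustrative, not taken from this configuration):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd-es
  template:
    metadata:
      labels:
        app: fluentd-es
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1.16-debian-elasticsearch8-1  # illustrative image
          env:
            - name: FLUENT_ELASTICSEARCH_HOST    # illustrative variable names
              value: elasticsearch.logging.svc
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
          resources:
            requests:
              cpu: 100m       # placeholder figures
              memory: 200Mi
            limits:
              memory: 512Mi
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log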
Caveats and Considerations
1. Resource Utilisation: Fluentd can consume significant CPU and memory resources, especially in environments with high log volumes. Monitor resource usage closely and adjust resource requests and limits according to your cluster's capacity and workload requirements. 2. Configuration Complexity: Fluentd's configuration can be complex, particularly when configuring input, filtering, and output plugins. Thoroughly test and validate the Fluentd configuration to ensure it meets your logging requirements and effectively captures relevant log data. 3. Security Considerations: Secure the Fluentd deployment by following best practices for managing secrets and access control. Ensure that sensitive information, such as credentials and configuration details, are properly encrypted and protected.
Technologies
Related Patterns
Pod Readiness
MESHERY4b83
fluentd-kubernetes-aws

MESHERY49fa

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
FLUENTD-KUBERNETES-AWS
Description
Fluentd is utilized as a robust log forwarding and aggregation solution, essential for collecting, processing, and forwarding logs from various sources within Kubernetes pods to AWS-based storage or analytics services. This design focuses on integrating Fluentd seamlessly into Kubernetes to enhance observability and troubleshoot application issues effectively. Key considerations include setting up Fluentd DaemonSets to ensure it runs on every node, configuring filters and parsers to handle different log formats, and directing logs to Amazon S3, CloudWatch Logs, or Elasticsearch for storage and analysis. Proper resource allocation, such as CPU and memory requests and limits, is established to optimize Fluentd performance without impacting other applications. Security measures, including role-based access controls and encryption, are implemented to protect sensitive log data.
Caveats and Considerations
Continuous monitoring and scaling strategies are employed to maintain Fluentd's availability and responsiveness as Kubernetes workloads evolve.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
gitlab runner deployment

MESHERY4170

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
GITLAB RUNNER DEPLOYMENT
Description
This configuration ensures that a single instance of the GitLab Runner is deployed within the gitlab-runner namespace. The GitLab Runner is configured with a specific ServiceAccount, CPU resource requests and limits, and is provided with a ConfigMap containing the configuration file config.toml. The deployment is designed to continuously restart the pod (restartPolicy: Always) to ensure the GitLab Runner remains available for executing jobs.
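Pulling together the elements named in the description and caveats, the deployment plausibly looks like the following sketch; the CPU figures and label are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab-runner
  namespace: gitlab-runner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitlab-runner    # placeholder label
  template:
    metadata:
      labels:
        app: gitlab-runner
    spec:
      serviceAccountName: gitlab-admin
      restartPolicy: Always
      containers:
        - name: gitlab-runner
          image: gitlab/gitlab-runner:latest
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 100m     # placeholder
            limits:
              cpu: 500m     # placeholder
          volumeMounts:
            - name: config
              mountPath: /etc/gitlab-runner
      volumes:
        - name: config
          configMap:
            name: gitlab-runner-config   # expected to contain config.toml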
Caveats and Considerations
1. Resource Allocation: Ensure that the CPU resource requests and limits specified in the configuration are appropriate for the workload of the GitLab Runner. Monitor resource usage and adjust these values as necessary to prevent resource contention and ensure optimal performance. 2. Image Pull Policy: The configuration specifies imagePullPolicy: Always, which causes Kubernetes to pull the Docker image (gitlab/gitlab-runner:latest) every time the pod is started. While this ensures that the latest image is always used, it may increase deployment time and consume additional network bandwidth. Consider whether this policy aligns with your deployment requirements and constraints. 3. Security: Review the permissions granted to the gitlab-admin ServiceAccount to ensure that it has appropriate access rights within the Kubernetes cluster. Limit the permissions to the minimum required for the GitLab Runner to perform its tasks to reduce the risk of unauthorized access or privilege escalation. 4. ConfigMap Management: Ensure that the gitlab-runner-config ConfigMap referenced in the configuration contains the correct configuration settings for the GitLab Runner. Monitor and manage changes to the ConfigMap to ensure that the GitLab Runner's configuration remains up-to-date and consistent across deployments.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
gke-online-serving-single-gpu

MESHERY481f

RELATED PATTERNS
Pod Resource Request
MESHERY4a23
GKE-ONLINE-SERVING-SINGLE-GPU
Description
This design outlines a Kubernetes architecture tailored for online serving workloads that require GPU acceleration. This design is optimized for Google Kubernetes Engine (GKE), leveraging a single GPU instance to enhance computational performance for machine learning inference, real-time analytics, or other GPU-intensive tasks.
Caveats and Considerations
Continuous monitoring and optimization of GPU utilization and workload distribution are necessary to maintain optimal performance and avoid resource contention among Pods sharing GPU resources.
Technologies
Related Patterns
Pod Resource Request
MESHERY4a23
glasskube-operator

MESHERY4dfd

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
GLASSKUBE-OPERATOR
Description
The Glasskube Operator is an open source Kubernetes operator that aims to simplify the deployment and maintenance of various popular open source tools. Each tool is represented by a new Kubernetes custom resource definition (CRD) and most user-facing configuration parameters are available via that CRD. Its philosophy is to emphasize ease of use and strong defaults over rich configuration; configurations are designed to cover as many use cases as possible with minimal user configuration.
Caveats and Considerations
For caveats and considerations, see the docs: https://glasskube.eu/docs/getting-started/settings/
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
grafana deployment

MESHERY4f2a

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
GRAFANA DEPLOYMENT
Description
The provided YAML configuration defines a Kubernetes Deployment named "grafana" within the "monitoring" namespace. This Deployment ensures the availability of one instance of Grafana, a monitoring and visualization tool. It specifies resource requirements, including memory and CPU limits, and mounts volumes for persistent storage and configuration. The container runs the latest version of the Grafana image, exposing port 3000 for access. The configuration also includes a Pod template with labels for Pod identification and a selector to match labels for managing Pods.
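A sketch matching the elements called out above and in the caveats; the resource figures and label are placeholders, while the image, port, namespace, and volume names come from the description:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:latest
          ports:
            - containerPort: 3000
          resources:
            limits:
              memory: 1Gi    # placeholder limits
              cpu: "1"
          volumeMounts:
            - name: grafana-storage
              mountPath: /var/lib/grafana
            - name: grafana-datasources
              mountPath: /etc/grafana/provisioning/datasources
      volumes:
        - name: grafana-storage
          emptyDir: {}       # ephemeral, as the caveats note
        - name: grafana-datasources
          configMap:
            name: grafana-datasources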
Caveats and Considerations
1. Container Image Version: While the configuration uses grafana/grafana:latest for the container image, it's important to note that relying on the latest tag can introduce instability if Grafana publishes a new version that includes breaking changes or bugs. Consider specifying a specific version tag for more predictable behavior. 2. Resource Limits: Resource limits (memory and cpu) are specified for the container. Ensure that these limits are appropriate for your deployment environment and the expected workload of Grafana. Adjust these limits based on performance testing and monitoring. 3. Storage: The configuration uses an emptyDir volume for Grafana's storage. This volume is ephemeral and will be deleted if the Pod restarts or is rescheduled to a different node. Consider using a persistent volume (e.g., PersistentVolumeClaim) for storing Grafana data to ensure data persistence across Pod restarts. 4. Configurations: Configuration for Grafana's data sources is mounted using a ConfigMap. Ensure that the ConfigMap (grafana-datasources) is properly configured with the required data source configurations. Verify that changes to the ConfigMap are propagated to the Grafana Pod without downtime.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
guest_book

MESHERY4b71

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
GUEST_BOOK
Description
A simple guest book application deployment using Kubernetes components such as Deployments, Services, and ConfigMaps.
Caveats and Considerations
Make sure to change the default secrets.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
hello-app

MESHERY4089

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
HELLO-APP
Description
A hello-app application using NGINX and Kubernetes components.
Caveats and Considerations
No caveats
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
instana-agent-for-Kubernetes

MESHERY4d3d

RELATED PATTERNS
Pod Readiness
MESHERY4b83
INSTANA-AGENT-FOR-KUBERNETES
Description
The Instana agent is built for microservices and enables IT Ops to build applications faster and deliver higher quality services by automating monitoring, tracing, and root cause analysis. It provides automated observability with AI and the ability to democratize observability, making it accessible to anyone across DevOps, SRE, platform engineering, ITOps, and development. Instana gives you 1-second granularity, which helps you quickly detect problems or transactions. Additionally, you get 100% of traces, which allows you to fix issues easily. Instana contextualizes data from all sources, including OpenTelemetry, to provide the insights needed to keep up with the pace of change.
Caveats and Considerations
For caveats and considerations, see the docs: https://www.ibm.com/products/instana
Technologies
Related Patterns
Pod Readiness
MESHERY4b83
iscsi

MESHERY48a0

RELATED PATTERNS
Pod Resource Request
MESHERY4a23
ISCSI
Description
Connect Kubernetes clusters to iSCSI devices for scalable storage solutions, supporting direct or multipath connections with CHAP authentication.
Caveats and Considerations
Ensure compatibility of Kubernetes and iSCSI versions, configure network settings appropriately, and monitor performance and scalability of both storage and network infrastructure.
Technologies
Related Patterns
Pod Resource Request
MESHERY4a23
istio-ingress-service-web-api-v1-only

MESHERY48d4

ISTIO-INGRESS-SERVICE-WEB-API-V1-ONLY
Description
Requests with the URI prefix kiali are routed to the kiali.istio-system.svc.cluster.local service on port 20001. Requests with URI prefixes like /web-api/v1/getmultiple, /web-api/v1/create, and /web-api/v1/manage are routed to the web-api service with the subset v1. Requests with URI prefixes openapi/ui/ and /openapi are routed to the web-api service on port 9080. Requests with URI prefixes like /loginwithtoken, /login, and /callback are routed to different services, including web-app and authentication. Requests with any other URI prefix are routed to the web-app service on port 80.
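A hedged fragment of what such routing rules look like as an Istio VirtualService; the resource and gateway names are assumptions, and only a few of the listed routes are shown:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web-api-v1-only        # assumed name
spec:
  hosts:
    - "*"
  gateways:
    - istio-ingressgateway     # assumed gateway name
  http:
    - match:
        - uri:
            prefix: /kiali
      route:
        - destination:
            host: kiali.istio-system.svc.cluster.local
            port:
              number: 20001
    - match:
        - uri:
            prefix: /web-api/v1/getmultiple
        - uri:
            prefix: /web-api/v1/create
        - uri:
            prefix: /web-api/v1/manage
      route:
        - destination:
            host: web-api
            subset: v1
    - route:                   # catch-all: everything else goes to web-app
        - destination:
            host: web-app
            port:
              number: 80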
Caveats and Considerations
Ensure Istio control plane is up and running
Technologies
k8s Deployment-2

MESHERY4d32

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
K8S DEPLOYMENT-2
Description
Sample Kubernetes deployment.
Caveats and Considerations
No caveats
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
knative-service

MESHERY4ce7

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
KNATIVE-SERVICE
Description
This YAML configuration defines a Kubernetes Deployment for a Knative service. This Deployment, named "knative-service," specifies that a container will be created using a specified container image, which should be replaced with the actual image name. The container is configured to listen on port 8080. The Deployment ensures that a single replica of the container is maintained within the "knative-serving" namespace. The Deployment uses labels to identify the pods it manages. Additionally, a Kubernetes Service is defined to expose the Deployment. This Service, named "knative-service," is also created within the "knative-serving" namespace. It uses a selector to match the pods labeled with "app: knative-service" and maps the Service port 80 to the container port 8080, facilitating external access to the deployed application. Furthermore, a Knative Service resource is configured to manage the Knative service. This Knative Service, also named "knative-service" and located in the "knative-serving" namespace, is configured with the same container image and port settings. The Knative Service template includes metadata labels and container specifications, ensuring consistent deployment and management within the Knative environment. This setup allows the Knative service to handle HTTP requests efficiently and leverage Knative's autoscaling capabilities.
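A minimal sketch of the Knative Service resource described above; as the description notes, the image is a placeholder to be replaced with the actual image name:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: knative-service
  namespace: knative-serving
spec:
  template:
    metadata:
      labels:
        app: knative-service
    spec:
      containers:
        - image: <your-image>    # replace with the actual image
          ports:
            - containerPort: 8080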
Caveats and Considerations
Image Pull Policy:Ensure the image pull policy is appropriately set, especially if using a custom or private container image. You may need to configure Kubernetes to access private image repositories by setting up image pull secrets. Resource Requests and Limits: Define resource requests and limits for CPU and memory to ensure that the Knative service runs efficiently without exhausting cluster resources. This helps in resource allocation and autoscaling. Namespace Management: Deploying to the knative-serving namespace is typical for Knative components, but for user applications, consider using a separate namespace for better organization and access control. Autoscaling Configuration: Knative supports autoscaling based on metrics like concurrency or CPU usage. Configure autoscaling settings to match your application's load characteristics. Networking and Ingress: Ensure your Knative service is properly exposed via an ingress or gateway if external access is required. Configure DNS settings and TLS for secure access. Monitoring and Logging: Implement monitoring and logging to track the performance and health of your Knative service. Use tools like Prometheus, Grafana, and Elasticsearch for this purpose.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
mTLS-handshake-acceleration-for-Istio

MESHERY4d09

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
MTLS-HANDSHAKE-ACCELERATION-FOR-ISTIO
Description
Cryptographic operations are among the most compute-intensive and critical operations when it comes to secured connections. Istio uses Envoy as the “gateways/sidecar” to handle secure connections and intercept the traffic. Depending upon use cases, when an ingress gateway must handle a large number of incoming TLS and secured service-to-service connections through sidecar proxies, the load on Envoy increases. The potential performance depends on many factors, such as size of the cpuset on which Envoy is running, incoming traffic patterns, and key size. These factors can impact Envoy serving many new incoming TLS requests. To achieve performance improvements and accelerated handshakes, a new feature was introduced in Envoy 1.20 and Istio 1.14. It can be achieved with 3rd Gen Intel® Xeon® Scalable processors, the Intel® Integrated Performance Primitives (Intel® IPP) crypto library, CryptoMB Private Key Provider Method support in Envoy, and Private Key Provider configuration in Istio using ProxyConfig.
Caveats and Considerations
Ensure networking is set up properly and the correct annotations are applied to each resource for custom Intel configuration.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
marblerun

MESHERY44e5

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
MARBLERUN
Description
MarbleRun: the control plane for confidential computing. MarbleRun is a framework for deploying distributed confidential computing applications. MarbleRun acts as a confidential operator for your deployment. Think of a trusted party in the control plane. Build your confidential microservices with EGo, Gramine, or similar runtimes, orchestrate them with Kubernetes on an SGX-enabled cluster, and let MarbleRun take care of the rest. Deploy end-to-end secure and verifiable AI pipelines or crunch on sensitive big data in the cloud. Confidential computing at scale has never been easier. MarbleRun simplifies the process by handling much of the groundwork. It ensures that your app's topology adheres to your specified manifest. It verifies the identity and integrity of all your services, bootstraps them, and establishes secure, encrypted communication channels. As your app needs to scale, MarbleRun manages the addition of new instances, ensuring their secure verification.
Caveats and Considerations
A working SGX DCAP environment is required for MarbleRun. For ease of exploring and testing, a simulation mode is provided with --simulation that runs without SGX hardware. Depending on your setup, you may follow the quickstart for SGX-enabled clusters. Alternatively, if your setup doesn't support SGX, you can follow the quickstart in simulation mode by selecting the respective tabs. For more context on caveats and considerations, see the docs: https://docs.edgeless.systems/marblerun/getting-started/quickstart
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
mattermost operator

MESHERY4eab

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
MATTERMOST OPERATOR
Description
This YAML file defines a Kubernetes Deployment for the mattermost-operator in the mattermost-operator namespace. The deployment is configured to run a single replica of the Mattermost operator, which manages Mattermost instances within the Kubernetes cluster. The pod template specifies the container details for the operator. The container, named mattermost-operator, uses the image mattermost/mattermost-operator:latest and is set to pull the image if it is not already present (IfNotPresent). The container runs the /mattermost-operator command with arguments to enable leader election and set the metrics address to 0.0.0.0:8383. Several environment variables are defined to configure the operator's behaviour, such as MAX_RECONCILING_INSTALLATIONS (set to 20), REQUEUE_ON_LIMIT_DELAY (set to 20 seconds), and MAX_RECONCILE_CONCURRENCY (set to 10). These settings control how the operator handles the reconciliation process for Mattermost installations. The container also exposes a port (8383) for metrics, allowing monitoring and observation of the operator's performance. The deployment specifies that the pods should use the mattermost-operator service account, ensuring they have the appropriate permissions to interact with the Kubernetes API and manage Mattermost resources.
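Condensing the description into a sketch (the exact argument spellings and label key are approximations; the image, command, environment values, port, and service account come from the text above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mattermost-operator
  namespace: mattermost-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      name: mattermost-operator
  template:
    metadata:
      labels:
        name: mattermost-operator
    spec:
      serviceAccountName: mattermost-operator
      containers:
        - name: mattermost-operator
          image: mattermost/mattermost-operator:latest
          imagePullPolicy: IfNotPresent
          command: ["/mattermost-operator"]
          args: ["--enable-leader-election", "--metrics-addr=0.0.0.0:8383"]  # approximated spellings
          env:
            - name: MAX_RECONCILING_INSTALLATIONS
              value: "20"
            - name: REQUEUE_ON_LIMIT_DELAY
              value: "20s"
            - name: MAX_RECONCILE_CONCURRENCY
              value: "10"
          ports:
            - containerPort: 8383
              name: metrics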
Caveats and Considerations
1. Resource Allocation: The deployment specifies no resource limits or requests for the mattermost-operator container. It is crucial to define these to ensure the operator has sufficient CPU and memory to function correctly without affecting other workloads in the cluster. 2. Image Tag: The latest tag is used for the Mattermost operator image. This practice can lead to unpredictability in deployments, as the latest tag may change and introduce unexpected changes or issues. It is recommended to use a specific version tag to ensure consistency. 3. Security Context: The deployment does not specify a detailed security context for the container. Adding constraints such as runAsNonRoot, readOnlyRootFilesystem, and dropCapabilities can enhance security by limiting the container’s privileges. 4. Environment Variables: The environment variables like MAX_RECONCILING_INSTALLATIONS, REQUEUE_ON_LIMIT_DELAY, and MAX_RECONCILE_CONCURRENCY are set directly in the deployment. If these values need to be adjusted frequently, consider using a ConfigMap to manage them externally. 5. Metrics and Monitoring: The metrics address is exposed on port 8383. Ensure that appropriate monitoring tools are in place to capture and analyse these metrics for performance tuning and troubleshooting.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
meshery-cilium-deployment

MESHERY4267

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
MESHERY-CILIUM-DEPLOYMENT
Description
This is a sample app for testing Kubernetes deployments and Cilium.
Caveats and Considerations
Ensure networking is set up properly and the correct annotations are applied to each resource for custom Intel configuration.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
minIO Deployment

MESHERY4c90

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
MINIO DEPLOYMENT
Description
This configuration sets up a single MinIO instance with specific environment variables, health checks, and life cycle actions, utilising a PersistentVolumeClaim for data storage within a Kubernetes cluster. It ensures that MinIO is deployed and managed according to the specified parameters.
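A sketch of the deployment shape described above; the image tag and Secret wiring are assumptions (the caveats below recommend moving the hard-coded keys into a Secret), while the single replica, environment variable names, health check, and minio claim come from the description and caveats:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
        - name: minio
          image: minio/minio:latest    # assumed tag
          args: ["server", "/data"]
          env:
            # Prefer secretKeyRef over literal values, per the caveats.
            - name: MINIO_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: minio-creds    # hypothetical Secret
                  key: access-key
            - name: MINIO_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: minio-creds
                  key: secret-key
          ports:
            - containerPort: 9000
          livenessProbe:
            httpGet:
              path: /minio/health/live
              port: 9000
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: minio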
Caveats and Considerations
1. Replication and High Availability: The configuration specifies only one replica (replicas: 1). For production environments requiring high availability and fault tolerance, consider increasing the number of replicas and configuring MinIO for distributed mode to ensure data redundancy and availability. 2. Security Considerations: The provided configuration includes hard-coded access and secret keys (MINIO_ACCESS_KEY and MINIO_SECRET_KEY) within the YAML file. It is crucial to follow best practices for secret management in Kubernetes, such as using Kubernetes Secrets or external secret management solutions, to securely manage sensitive information. 3. Resource Requirements: Resource requests and limits for CPU, memory, and storage are not defined in the configuration. Assess and adjust these resource specifications according to the expected workload and performance requirements to ensure optimal resource utilisation and avoid resource contention. 4. Storage Provisioning: The configuration relies on a PersistentVolumeClaim (PVC) named minio to provide storage for MinIO. Ensure that the underlying storage provisioner and PersistentVolume (PV) configuration meet the performance, capacity, and durability requirements of the MinIO workload.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
mongoDB-Sample-app

MESHERY457b

RELATED PATTERNS
Minecraft App
MESHERY48dd
MONGODB-SAMPLE-APP
Description
This design contains a very simple application that you can use to test your MongoDB Deployment. This application requires a MongoDB resource deployed with one of the MongoDB Operators.
Caveats and Considerations
Make sure to deploy one of the MongoDB Operators alongside this sample app, and use your own custom secrets to connect to MongoDB.
Technologies
Related Patterns
Minecraft App
MESHERY48dd
my first app design

MESHERY46a2

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
MY FIRST APP DESIGN
Description
This infrastructure design defines two services within a system: 1. **Customer Service**: - Type: Customer - Version: 0.0.50 - Model: Jira Service Desk Operator - Attributes: This service is configured with specific settings, including an email address, legacy customer mode, and a name. It is categorized as a tool within the system. 2. **Notebook Service**: - Type: Notebook - Version: 1.6.1 - Model: Kubeflow - Attributes: This service is categorized as a machine learning tool. It has metadata related to its source URI and appearance. These services are components within a larger system or design, each serving a distinct purpose. The Customer Service is associated with customer-related operations, while the Notebook Service is related to machine learning tasks.
Caveats and Considerations
Make sure to use correct credentials for Jira service operator
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
my-sql-with-cinder-vol-plugin

MESHERY40de

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
MY-SQL-WITH-CINDER-VOL-PLUGIN
Description
Cinder is a Block Storage service for OpenStack. This example shows how it can be used as an attachment mounted to a pod in Kubernetes. 1. Start kubelet with the cloud provider set to openstack and a valid cloud config. Sample cloud_config: [Global] auth-url=https://os-identity.vip.foo.bar.com:5443/v2.0 username=user password=pass region=region1 tenant-id=0c331a1df18571594d49fe68asa4e 2. Create a Cinder volume, e.g. cinder create --display-name=test-repo 2 3. Use the id of the created Cinder volume in a pod definition. 4. Create a new pod with the definition: cluster/kubectl.sh create -f examples/mysql-cinder-pd/mysql.yaml This should now: 1. Attach the specified volume to the kubelet's host machine. 2. Format the volume if required (only if the volume specified is not already formatted to the fstype specified). 3. Mount it on the kubelet's host machine. 4. Spin up a container with this volume mounted to the path specified in the pod definition.
Caveats and Considerations
Currently the Cinder volume plugin is designed to work only on Linux hosts and offers ext4 and ext3 as supported fs types. Make sure that the kubelet host machine has the required executables installed. Ensure Cinder is installed and configured properly in the region in which the kubelet is spun up.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
mysql operator

MESHERY4367

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
MYSQL OPERATOR
Description
This YAML file defines a Kubernetes Deployment for the mysql-operator in the mysql-operator namespace. The deployment specifies a single replica of the operator to manage MySQL instances within the cluster. The operator container uses the image container-registry.oracle.com/mysql/community-operator:8.4.0-2.1.3 and runs the mysqlsh command with specific arguments for the MySQL operator.
Caveats and Considerations
1. Single Replica: Running a single replica of the operator can be a single point of failure. Consider increasing the number of replicas for high availability if supported. 2. Image Version: The image version 8.4.0-2.1.3 is specified, ensuring consistent deployments. Be mindful of updating this version in accordance with operator updates and testing compatibility. 3. Security Context: The security context is configured to run as a non-root user (runAsUser: 2), with no privilege escalation (allowPrivilegeEscalation: false), and a read-only root filesystem (readOnlyRootFilesystem: true). This enhances the security posture of the deployment. 4. Environment Variables: Sensitive information should be handled securely. Environment variables such as credentials should be managed using Kubernetes Secrets if necessary. 5. Readiness Probe: The readiness probe uses a file-based check, which is simple but ensure that the mechanism creating the /tmp/mysql-operator-ready file is reliable.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
nginx ingress

MESHERY4d83

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
NGINX INGRESS
Description
Creates a Kubernetes deployment with two replicas running NGINX containers and a service to expose these pods internally within the Kubernetes cluster. The NGINX containers are configured to listen on port 80, and the service routes traffic to these containers.
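A minimal sketch of the two-replica Deployment and internal Service this entry describes; the image tag is an assumption:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25    # assumed tag
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80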
Caveats and Considerations
ImagePullPolicy: In the Deployment spec, the imagePullPolicy is set to Never. This means that Kubernetes will never attempt to pull the NGINX image from a container registry, assuming it's already present on the node where the pod is scheduled. This can be problematic if the image is not present or if you need to update to a newer version. Consider setting the imagePullPolicy to Always or IfNotPresent depending on your deployment requirements. Resource Allocation: The provided manifest doesn't specify resource requests and limits for the NGINX container. Without resource limits, the container can consume excessive resources, impacting other workloads on the same node. It's recommended to define resource requests and limits based on the expected workload characteristics to ensure stability and resource efficiency.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
nginx-deployment

MESHERY4817

NGINX-DEPLOYMENT
Description
Simple application deployment with nginx
Caveats and Considerations
No caveats
Technologies
node-feature-discovery

MESHERY4481

RELATED PATTERNS
Minecraft App
MESHERY48dd
NODE-FEATURE-DISCOVERY
Description
Node Feature Discovery (NFD) is a Kubernetes add-on for detecting hardware features and system configuration. Detected features are advertised as node labels. NFD provides flexible configuration and extension points for a wide range of vendor and application specific node labeling needs.
Caveats and Considerations
Check the docs for caveats and considerations: https://kubernetes-sigs.github.io/node-feature-discovery/v0.16/get-started/introduction.html
Technologies
Related Patterns
Minecraft App
MESHERY48dd
postgreSQL cluster

MESHERY4d4f

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
POSTGRESQL CLUSTER
Description
This YAML configuration defines a PostgreSQL cluster deployment tailored for Google Kubernetes Engine (GKE) utilizing the Cloud Native PostgreSQL (CNPG) operator. The cluster, named "gke-pg-cluster," is designed to offer a standard PostgreSQL environment, featuring three instances for redundancy and high availability. Each instance is provisioned with 2Gi of premium storage, ensuring robust data persistence. Resource allocations are specified, with each instance requesting 1Gi of memory and 1000m (milliCPU) of CPU, and limits set to the same values. Additionally, the cluster is configured with pod anti-affinity, promoting distribution across nodes for fault tolerance. Host-based authentication is enabled for security, permitting access from IP range 10.48.0.0/20 using the "md5" method. Monitoring capabilities are integrated, facilitated by enabling pod monitoring. The configuration also includes tolerations and additional pod affinity rules, enhancing scheduling flexibility and optimizing resource utilization within the Kubernetes environment. This deployment exemplifies a robust and scalable PostgreSQL infrastructure optimized for cloud-native environments, aligning with best practices for reliability, performance, and security.
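A hedged sketch of such a CNPG Cluster resource, using the values given in the description and caveats; the field layout follows the CNPG API as commonly documented, so verify it against your operator version:

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: gke-pg-cluster
spec:
  instances: 3
  storage:
    size: 2Gi
    storageClass: premium-rwo
  resources:
    requests:
      memory: 1Gi
      cpu: 1000m
    limits:
      memory: 1Gi
      cpu: 1000m
  postgresql:
    pg_hba:
      - host all all 10.48.0.0/20 md5
  monitoring:
    enablePodMonitor: true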
Caveats and Considerations
1. Resource Requirements: The specified resource requests and limits (memory and CPU) should be carefully evaluated to ensure they align with the expected workload demands. Adjustments may be necessary based on actual usage patterns and performance requirements. 2. Storage Class: The choice of storage class ("premium-rwo" in this case) should be reviewed to ensure it meets performance, availability, and cost requirements. Depending on the workload characteristics, other storage classes may be more suitable. 3. Networking Configuration: The configured host-based authentication rules may need adjustment based on the network environment and security policies in place. Ensure that only authorized entities have access to the PostgreSQL cluster.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
prometheus-opencost-exporter

MESHERY4616

RELATED PATTERNS
Pod Readiness
MESHERY4b83
PROMETHEUS-OPENCOST-EXPORTER
Description
Prometheus exporter for OpenCost Kubernetes cost monitoring data. This design bootstraps a Prometheus OpenCost Exporter deployment on a Kubernetes cluster using the Meshery Playground. OpenCost is a vendor-neutral open source project for measuring and allocating cloud infrastructure and container costs in real time. Built by Kubernetes experts and supported by Kubernetes practitioners, OpenCost shines a light into the black box of Kubernetes spend.
Caveats and Considerations
Set the PROMETHEUS_SERVER_ENDPOINT environment variable to the address of your Prometheus server. Add the scrapeConfig to it, using the preferred means for your Prometheus install (e.g. -f https://raw.githubusercontent.com/opencost/opencost/develop/kubernetes/prometheus/extraScrapeConfigs.yaml). Consider using the OpenCost Helm Chart for additional Prometheus configuration options. For more information, refer to the docs: https://www.opencost.io/docs/installation/prometheus
Technologies
Related Patterns
Pod Readiness
MESHERY4b83
prometheus-operator-crd-cluster-roles

MESHERY4571
PROMETHEUS-OPERATOR-CRD-CLUSTER-ROLES
Description
Prometheus Operator CRD cluster roles.
Caveats and Considerations
Prometheus Operator CRD cluster roles.
Technologies
prometheus-postgres-exporter

MESHERY4dd9

RELATED PATTERNS
Pod Readiness
MESHERY4b83
PROMETHEUS-POSTGRES-EXPORTER
Description
This design enables seamless integration with Prometheus' robust ecosystem of visualization and alerting tools, empowering teams to monitor database health, query performance, resource utilization, and other critical metrics.
Caveats and Considerations
No caveats
Technologies
Related Patterns
Pod Readiness
MESHERY4b83
prometheus-versus-3

MESHERY48bb

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
PROMETHEUS-VERSUS-3
Description
This is a simple Prometheus monitoring design.
Caveats and Considerations
Networking should be properly configured to enable communication between the frontend and backend components of the app.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
prometheus.yaml

MESHERY46c3

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
PROMETHEUS.YAML
Description
prometheus
Caveats and Considerations
prometheus
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
prometheus_kubernetes

MESHERY4a71

RELATED PATTERNS
Pod Readiness
MESHERY4b83
PROMETHEUS_KUBERNETES
Description
This outlines a configuration for deploying Prometheus to monitor Kubernetes clusters effectively. It focuses on setting up Prometheus to collect and store metrics from various Kubernetes components such as nodes, pods, and services.
Caveats and Considerations
Prometheus can consume significant CPU and memory resources, especially when monitoring large-scale Kubernetes clusters with numerous pods and services. Careful resource allocation and monitoring are essential to prevent performance degradation or cluster instability.
Technologies
Related Patterns
Pod Readiness
MESHERY4b83
rabbitmq-cluster-operator

MESHERY4f09

RELATED PATTERNS
Pod Resource Request
MESHERY4a23
RABBITMQ-CLUSTER-OPERATOR
Description
The RabbitMQ Cluster Operator design leverages Kubernetes to automate the deployment and management of RabbitMQ clusters, a widely-used open-source message broker. This design facilitates streamlined operations and enables developers to focus on application logic, leveraging RabbitMQ's robust messaging capabilities for building resilient and scalable distributed systems.
Caveats and Considerations
The RabbitMQ Cluster Operator itself requires maintenance and updates to stay aligned with RabbitMQ and Kubernetes versions. Keeping the operator up to date with the latest patches and features is essential for stability and security.
Technologies
Related Patterns
Pod Resource Request
MESHERY4a23
replication controller

MESHERY4849

RELATED PATTERNS
Minecraft App
MESHERY48dd
REPLICATION CONTROLLER
Description
A ReplicationController ensures that a specified number of pod replicas are running at any one time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is always up and available. If there are too many pods, the ReplicationController terminates the extra pods. If there are too few, the ReplicationController starts more pods. Unlike manually created pods, the pods maintained by a ReplicationController are automatically replaced if they fail, are deleted, or are terminated. For example, your pods are re-created on a node after disruptive maintenance such as a kernel upgrade. For this reason, you should use a ReplicationController even if your application requires only a single pod. A ReplicationController is similar to a process supervisor, but instead of supervising individual processes on a single node, the ReplicationController supervises multiple pods across multiple nodes.
Caveats and Considerations
This example ReplicationController config runs three copies of the nginx web server, as sketched below. You can add Deployments, ConfigMaps, and Services to this design as your requirements dictate.
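A config of that shape, following the familiar example from the Kubernetes documentation:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80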
Technologies
Related Patterns
Minecraft App
MESHERY48dd
senthil-app-new

MESHERY4120

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
SENTHIL-APP-NEW
Description
A simple Kubernetes cluster with a pod that contains an NGINX container.
Caveats and Considerations
A simple Kubernetes cluster with a pod that contains an NGINX container.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
test design

MESHERY4ccd

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
TEST DESIGN
Description
Test
Caveats and Considerations
Test
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
the-new-stack

MESHERY4705

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
THE-NEW-STACK
Description
The New Stack (TNS) is a simple three-tier demo application, fully instrumented with the three pillars of observability: metrics, logs, and traces. It offers insight into what a modern observability stack looks like and lets you experience what it's like to pivot among different types of observability data. The TNS app is an example three-tier web app built by Weaveworks. It consists of a data layer, an application logic layer, and a load-balancing layer. To learn more about it, see How To Detect, Map and Monitor Docker Containers with Weave Scope from Weaveworks. The instrumentation for the TNS app is as follows: Metrics: each tier of the TNS app exposes metrics on /metrics endpoints, which are scraped by the Grafana Agent. Additionally, these metrics are tagged with exemplar information. The Grafana Agent then writes these metrics to Mimir for storage. Logs: each tier of the TNS app writes logs to standard output or standard error. They are captured by Kubernetes, collected by the Grafana Agent, and forwarded to Loki for storage. Traces: each tier of the TNS app sends traces in Jaeger format to the Grafana Agent, which converts them to OTel format and forwards them to Tempo for storage. Visualization: a Grafana instance configured to talk to the Mimir, Loki, and Tempo instances makes it possible to query and visualize the metrics, logs, and traces data.
Caveats and Considerations
Ensure enough resources are available on the k8s cluster
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
uptime-kuma

MESHERY422a

RELATED PATTERNS
Pod Readiness
MESHERY4b83
UPTIME-KUMA
Description
Uptime Kuma is an easy-to-use self-hosted monitoring tool. Features: monitoring uptime for HTTP(s) / TCP / HTTP(s) keyword / HTTP(s) JSON query / ping / DNS record / push / Steam game server / Docker containers; a fancy, reactive, fast UI/UX; notifications via Telegram, Discord, Gotify, Slack, Pushover, Email (SMTP), and 90+ other notification services; 20-second intervals; multiple languages; multiple status pages; mapping status pages to specific domains; ping charts; certificate info; proxy support; 2FA support.
Caveats and Considerations
To try a live demo of Uptime Kuma, use this link: https://demo.kuma.pet/start-demo It is a temporary live demo; all data will be deleted after 10 minutes. For caveats and considerations, check out the GitHub repo: https://github.com/louislam/uptime-kuma
Technologies
Related Patterns
Pod Readiness
MESHERY4b83
voting_app

MESHERY49d5

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
VOTING_APP
Description
A deployment of a voting app in a Kubernetes environment. The "voting_app" design leverages container orchestration to ensure scalability, reliability, and ease of deployment.
Caveats and Considerations
No caveats
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
webserver

MESHERY457a

RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
WEBSERVER
Description
This design runs a simple Python webserver on port 8000. It also contains a Kubernetes Service that connects to the deployment.
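A minimal sketch of this design, assuming a stock Python image and the standard library http.server module (names and tags are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
        - name: webserver
          image: python:3.12-slim    # assumed image
          command: ["python", "-m", "http.server", "8000"]
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: webserver
spec:
  selector:
    app: webserver
  ports:
    - port: 8000
      targetPort: 8000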
Caveats and Considerations
Ensure the port is not already in use.
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
APIClarity Trace Exporter


RELATED PATTERNS
my-http_auth1 (Copy)
MESHERY4769
APICLARITY TRACE EXPORTER
What this filter does
""
Caveats and Considerations
""
Technologies
Related Patterns
my-http_auth1 (Copy)
MESHERY4769
Authentication Filter


RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
AUTHENTICATION FILTER
What this filter does
""
Caveats and Considerations
""
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Ngnix depl filter


RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
NGNIX DEPL FILTER
What this filter does
""
Caveats and Considerations
""
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
Test Filter


RELATED PATTERNS
auth2
MESHERY47f2
TEST FILTER
What this filter does
""
Caveats and Considerations
""
Technologies
Related Patterns
auth2
MESHERY47f2
auth2


RELATED PATTERNS
Test Filter
MESHERY4de9
AUTH2
What this filter does
""
Caveats and Considerations
""
Technologies
Related Patterns
Test Filter
MESHERY4de9
meshery-filter-idclu


RELATED PATTERNS
auth2
MESHERY47f2
MESHERY-FILTER-IDCLU
What this filter does
""
Caveats and Considerations
""
Technologies
Related Patterns
auth2
MESHERY47f2
meshery-filter-iuhlv


RELATED PATTERNS
auth2
MESHERY47f2
MESHERY-FILTER-IUHLV
What this filter does
""
Caveats and Considerations
""
Technologies
Related Patterns
auth2
MESHERY47f2
meshery-filter-vykgz


RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
MESHERY-FILTER-VYKGZ
What this filter does
""
Caveats and Considerations
""
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
my-http_auth1


RELATED PATTERNS
Example Labels and Annotations
MESHERY4649
MY-HTTP_AUTH1
What this filter does
""
Caveats and Considerations
""
Technologies
Related Patterns
Example Labels and Annotations
MESHERY4649
my-http_auth1 (Copy)


RELATED PATTERNS
APIClarity Trace Exporter
MESHERY4370
MY-HTTP_AUTH1 (COPY)
What this filter does
""
Caveats and Considerations
""
Technologies
Related Patterns
APIClarity Trace Exporter
MESHERY4370
Using Envoy metrics
Coming Soon...
