CS 548—Fall 2025
Enterprise Software Architecture and Design
Assignment Ten—Kubernetes in the Cloud
In this assignment, you will deploy the microservices that you developed in previous assignments into a Kubernetes cluster in the cloud. We will use the Elastic Kubernetes Service (EKS) provided by Amazon Web Services. We will base the guidelines for using EKS on the quickstart guide, specifically the one for kubectl and eksctl, and we will use EC2 to deploy the worker nodes in the cluster.
Step 1: Install tools
Download and install kubectl on your laptop. If you have installed Docker Desktop, it will have installed its own version of kubectl. You will need to make sure that the version of kubectl you use differs by no more than one minor version from the current default version of the Kubernetes control plane in EKS (currently 1.30). Find out the client version as follows:
$ kubectl version --client
If necessary, upgrade your version of kubectl following the instructions at Amazon.
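The version-skew rule can be checked mechanically. The following is a minimal POSIX-shell sketch (not part of the assignment) that compares two major.minor version strings; in practice you would take the client version from kubectl version --client and compare it against the EKS control-plane version, assumed to be 1.30 here:

```shell
# skew_ok CLIENT SERVER: succeed if the minor versions differ by at most 1.
skew_ok() {
  client_minor=${1#*.}                 # "1.29" -> "29"
  server_minor=${2#*.}                 # "1.30" -> "30"
  diff=$((server_minor - client_minor))
  [ "$diff" -ge -1 ] && [ "$diff" -le 1 ]
}

skew_ok 1.29 1.30 && echo "1.29 vs 1.30: within the supported skew"
skew_ok 1.27 1.30 || echo "1.27 vs 1.30: upgrade kubectl"
```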
Download and install eksctl on your laptop. This tool will allow you to manage your Kubernetes cluster in EKS from your laptop.
Download and install the AWS command line interface (CLI) on your laptop. You will need this to configure the IAM identity you will use to authenticate to AWS with eksctl and kubectl.
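Before moving on, it is worth confirming that all three tools from this step are actually on your PATH. A small shell sketch (the tool names match the ones installed above):

```shell
# check_tool NAME: report whether NAME is available on the PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: MISSING"
  fi
}

for tool in kubectl eksctl aws; do
  check_tool "$tool"
done
```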
Step 2: Create IAM User for EKS
To use kubectl from your laptop, you will need to authenticate to AWS. You should never use the root user for this purpose. Instead, following the principle of least privilege, create an IAM user with policies that restrict its access.
You will need to set permissions for this user. The following summarizes the policies that are required. In addition to the two AWS-managed policies, which you cannot edit, you need to add three policies that you should customize with your AWS account id. First, there is the policy EksAllAccess:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:*",
      "Resource": "*"
    },
    {
      "Action": [
        "ssm:GetParameter",
        "ssm:GetParameters"
      ],
      "Resource": [
        "arn:aws:ssm:*:<account_id>:parameter/aws/*",
        "arn:aws:ssm:*::parameter/aws/*"
      ],
      "Effect": "Allow"
    },
    {
      "Action": [
        "kms:CreateGrant",
        "kms:DescribeKey"
      ],
      "Resource": "*",
      "Effect": "Allow"
    },
    {
      "Action": [
        "logs:PutRetentionPolicy"
      ],
      "Resource": "*",
      "Effect": "Allow"
    }
  ]
}
Second, there is the policy IamLimitedAccess:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:CreateInstanceProfile",
        "iam:DeleteInstanceProfile",
        "iam:GetInstanceProfile",
        "iam:RemoveRoleFromInstanceProfile",
        "iam:GetRole",
        "iam:CreateRole",
        "iam:DeleteRole",
        "iam:AttachRolePolicy",
        "iam:PutRolePolicy",
        "iam:ListInstanceProfiles",
        "iam:AddRoleToInstanceProfile",
        "iam:ListInstanceProfilesForRole",
        "iam:PassRole",
        "iam:DetachRolePolicy",
        "iam:DeleteRolePolicy",
        "iam:GetRolePolicy",
        "iam:GetOpenIDConnectProvider",
        "iam:CreateOpenIDConnectProvider",
        "iam:DeleteOpenIDConnectProvider",
        "iam:TagOpenIDConnectProvider",
        "iam:ListAttachedRolePolicies",
        "iam:TagRole",
        "iam:GetPolicy",
        "iam:CreatePolicy",
        "iam:DeletePolicy",
        "iam:ListPolicyVersions"
      ],
      "Resource": [
        "arn:aws:iam::<account_id>:instance-profile/eksctl-*",
        "arn:aws:iam::<account_id>:role/eksctl-*",
        "arn:aws:iam::<account_id>:policy/eksctl-*",
        "arn:aws:iam::<account_id>:oidc-provider/*",
        "arn:aws:iam::<account_id>:role/aws-service-role/eks-nodegroup.amazonaws.com/AWSServiceRoleForAmazonEKSNodegroup",
        "arn:aws:iam::<account_id>:role/eksctl-managed-*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:GetRole"
      ],
      "Resource": [
        "arn:aws:iam::<account_id>:role/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:CreateServiceLinkedRole"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "iam:AWSServiceName": [
            "eks.amazonaws.com",
            "eks-nodegroup.amazonaws.com",
            "eks-fargate.amazonaws.com"
          ]
        }
      }
    }
  ]
}
You also need to provide access to the Elastic Container Registry (ECR); call this policy EcrAccess:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "ecr:PutImageTagMutability",
        "ecr:StartImageScan",
        "ecr:DescribeImageReplicationStatus",
        "ecr:ListTagsForResource",
        "ecr:UploadLayerPart",
        "ecr:BatchDeleteImage",
        "ecr:ListImages",
        "ecr:BatchGetRepositoryScanningConfiguration",
        "ecr:DeleteRepository",
        "ecr:CompleteLayerUpload",
        "ecr:TagResource",
        "ecr:DescribeRepositories",
        "ecr:BatchCheckLayerAvailability",
        "ecr:ReplicateImage",
        "ecr:GetLifecyclePolicy",
        "ecr:PutLifecyclePolicy",
        "ecr:DescribeImageScanFindings",
        "ecr:GetLifecyclePolicyPreview",
        "ecr:PutImageScanningConfiguration",
        "ecr:GetDownloadUrlForLayer",
        "ecr:DeleteLifecyclePolicy",
        "ecr:PutImage",
        "ecr:UntagResource",
        "ecr:BatchGetImage",
        "ecr:DescribeImages",
        "ecr:StartLifecyclePolicyPreview",
        "ecr:InitiateLayerUpload",
        "ecr:GetRepositoryPolicy"
      ],
      "Resource": "arn:aws:ecr:*:<account_id>:repository/*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "ecr:GetRegistryPolicy",
        "ecr:BatchImportUpstreamImage",
        "ecr:CreateRepository",
        "ecr:DescribeRegistry",
        "ecr:DescribePullThroughCacheRules",
        "ecr:GetAuthorizationToken",
        "ecr:PutRegistryScanningConfiguration",
        "ecr:CreatePullThroughCacheRule",
        "ecr:DeletePullThroughCacheRule",
        "ecr:GetRegistryScanningConfiguration",
        "ecr:PutReplicationConfiguration"
      ],
      "Resource": "*"
    }
  ]
}
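Rather than editing the <account_id> placeholder by hand in all three documents, you can substitute it with sed. The sketch below uses a hard-coded stand-in account id and a one-statement stand-in policy so that it is self-contained; the file names are assumptions, not part of the assignment:

```shell
ACCOUNT_ID=123456789012   # stand-in; in practice: aws sts get-caller-identity --query Account --output text

# A one-statement stand-in for the real policy documents above.
cat > policy.template.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:*",
      "Resource": "arn:aws:iam::<account_id>:role/eksctl-*"
    }
  ]
}
EOF

# Replace every <account_id> placeholder and write the finished policy.
sed "s/<account_id>/${ACCOUNT_ID}/g" policy.template.json > policy.json
grep -q "${ACCOUNT_ID}" policy.json && echo "placeholder replaced"
```

Run the same sed over each of the three saved documents before creating the policies in the IAM console or with the AWS CLI.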
Finish creating the IAM user with these policies. Now create a secret access key that will allow you to access the EKS cluster programmatically. Download the CSV file that includes the secret access key, and save it carefully where no one else can access it.
Now that you have created an IAM user with a secret access key and restricted permissions, you should configure the AWS CLI on your own computer with the credentials you have created for the IAM user cs548eks:
$ aws configure
When prompted, enter the access key ID and secret access key for the IAM user you created, as well as the default region (e.g., us-east-2) and output format (json). Verify that you have configured your credentials to authenticate as cs548eks:
$ aws sts get-caller-identity
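This command prints, among other fields, the ARN of the calling identity. Below is a sketch of a check you could wrap around it to confirm you are not on the root user; it is shown on a made-up ARN of the expected shape (the account id is fictitious):

```shell
# caller_is ARN USER: succeed if ARN identifies the IAM user USER.
caller_is() {
  case "$1" in
    *":user/$2") return 0 ;;
    *)           return 1 ;;
  esac
}

# In practice: ARN=$(aws sts get-caller-identity --query Arn --output text)
ARN="arn:aws:iam::123456789012:user/cs548eks"
caller_is "$ARN" cs548eks && echo "authenticated as cs548eks"
```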
Step 3: Create an EKS Cluster
Use the eksctl CLI to create your Kubernetes cluster in EKS:
$ eksctl create cluster --name=cluster-name
This will create a cluster of managed nodes, with a public and private subnet, in your default region. It will be replicated across several availability zones in your region. For debugging purposes, you can restrict the managed nodes to a single availability zone:
$ eksctl create cluster --name=cluster-name --zones=us-east-2a,us-east-2b --node-zones=us-east-2a
By default, eksctl will use m5.large EC2 instances, which can be expensive (remember that the idea is to scale up the number of pods on running worker nodes). You can specify criteria for choosing an instance and limit the number of worker nodes that are created, and you can use the --dry-run option to see what the actual configuration of the cluster will look like:
$ eksctl create cluster --name=cluster-name --zones=us-east-2a,us-east-2b --node-zones=us-east-2a --instance-selector-vcpus=2 --instance-selector-memory=4 --nodes=1 --dry-run
When you are finished with the cluster, delete it as follows:
$ eksctl delete cluster --name=cluster-name
You can view the resources created for a cluster in the CloudFormation console. Creating a cluster may take some time, sometimes up to twenty minutes, so you should follow creation and deletion in the CloudFormation console. Make sure that you select the appropriate region in the AWS console (e.g., Ohio for us-east-2). You can view the cluster nodes and the workloads deployed in the cluster using these commands:
$ kubectl get nodes -o wide
$ kubectl get pods -A -o wide
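If you want a quick scripted check that your worker nodes have joined, you can count the Ready rows. The sketch below runs on canned sample output so that it is self-contained (the node names are made up); in practice you would pipe the real output of kubectl get nodes through the same awk filter:

```shell
# Canned sample of `kubectl get nodes` output.
sample='NAME                            STATUS     ROLES    AGE   VERSION
ip-192-168-12-34.ec2.internal   Ready      <none>   10m   v1.30.0
ip-192-168-56-78.ec2.internal   NotReady   <none>   2m    v1.30.0'

# Skip the header row and count lines whose STATUS column is exactly Ready.
ready=$(printf '%s\n' "$sample" | awk 'NR > 1 && $2 == "Ready"' | wc -l)
echo "Ready nodes: $ready"
```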
By default, the cluster is open to public access. As with your EC2 instance, you will want to restrict access to your IP address or the Stevens network IP address range. If you do restrict public access, then you should allow private access within the Amazon network so that pods in the cluster can communicate among themselves. Use the AWS CLI to add this restriction, replacing the IP address below with your actual IP address:
$ aws eks update-cluster-config \
    --name cluster-name \
    --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="your-ip-address/32",endpointPrivateAccess=true
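The your-ip-address placeholder must be a plain IPv4 address. Below is a sketch of building the publicAccessCidrs value, with a hard-coded documentation address so that it is self-contained; in practice you might obtain your public address from a service such as checkip.amazonaws.com:

```shell
MY_IP="203.0.113.7"   # stand-in address from the TEST-NET-3 documentation range

# Reject anything that is not a plain dotted-quad before building the CIDR.
case "$MY_IP" in
  *[!0-9.]*) echo "not an IPv4 address: $MY_IP" >&2; exit 1 ;;
esac

CIDR="${MY_IP}/32"
echo "publicAccessCidrs=\"${CIDR}\""
```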
You should execute the kubectl commands again to confirm that you still have access to the cluster. Optionally execute the following command to enable control plane logging in your cluster:
$ aws eks update-cluster-config \
    --name cluster-name \
    --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager"],"enabled":true}]}'
You can view the logs at the AWS CloudWatch console.
