AWS





https://aws.amazon.com/ec2/instance-types/
https://aws.amazon.com/ebs/details/


Storage :

S3  : For low-latency or frequent access to your data. Can also be used for static website hosting.

Stores your objects across multiple facilities before returning SUCCESS.

Versioning :
To permanently delete a version of an object requires your AWS account credentials and a valid six-digit code and serial number from an authentication device in your physical possession.
Lifecycle rules can archive all your previous versions to the lower-cost Glacier storage class and delete them.
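
A minimal CLI sketch of enabling versioning (bucket name, MFA serial, and code are placeholders) :
aws s3api put-bucket-versioning --bucket my-bucket --versioning-configuration Status=Enabled
aws s3api put-bucket-versioning --bucket my-bucket --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"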

Event Notifications :  set up triggers to perform actions or send notifications in response to actions in Amazon S3 like PUTs, POSTs, COPYs, or DELETEs.

Object tags can also be used to label objects that belong to a specific project or business unit, and tags can be replicated across regions using Cross-Region Replication.

S3 CloudWatch Metrics : generate metrics for an S3 bucket, or configure filters to scope the metrics.

CloudWatch can fire an action when a threshold is reached on storage metrics (counts, timers, or rates).


Controlling access to Amazon S3 resources: 

IAM policies : manage multiple users under a single AWS account.
Bucket policies : restrict access to a specific Amazon VPC Endpoint or a set of endpoints.
Access Control Lists (ACLs) : grant specific permissions to specific users for an individual bucket or object.
Query string authentication : create a URL to an Amazon S3 object that is only valid for a limited time.
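
For example (bucket and key are placeholders), a presigned URL valid for one hour :
aws s3 presign s3://my-bucket/my-object --expires-in 3600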


S3 Analytics :  Provides daily visualizations of your storage usage

S3 Inventory :
 Provides a CSV (Comma Separated Values) flat-file output of your objects and their corresponding metadata on a daily or weekly basis for an S3 bucket.


Lifecycle Management : Automatically migrate S3 objects to Standard - Infrequent Access and/or Amazon Glacier based on the age of the data.
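
A minimal sketch of a lifecycle rule from the CLI (bucket name, rule ID, and day counts are illustrative) :
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://lifecycle.json

lifecycle.json :
{
  "Rules": [
    {
      "ID": "archive-old-data",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    }
  ]
}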

Cross-Region Replication (CRR) is enabled using the Management Console, the REST API, the AWS CLI, or the AWS SDKs. Versioning must be turned on for both the source and destination buckets to enable CRR.
Existing data in the bucket prior to enabling CRR is not replicated.
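
A minimal CRR sketch from the CLI (bucket names and role ARN are placeholders; both buckets must already have versioning enabled) :
aws s3api put-bucket-replication --bucket source-bucket --replication-configuration file://replication.json

replication.json :
{
  "Role": "arn:aws:iam::123456789012:role/replication-role",
  "Rules": [
    { "Status": "Enabled", "Prefix": "", "Destination": { "Bucket": "arn:aws:s3:::destination-bucket" } }
  ]
}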

S3 Transfer Acceleration
Enables fast, easy, and secure transfers of files over long distances.
Enabled through the Management Console, the REST API, the AWS CLI, or the AWS SDKs.
Data first arrives at an AWS Edge Location; no data is ever saved at AWS Edge Locations.
Useful when uploading to a centralized bucket from geographically dispersed locations.
Use if more than 25 Mbps of bandwidth is available.
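
Enable with, for example (bucket name is a placeholder) :
aws s3api put-bucket-accelerate-configuration --bucket my-bucket --accelerate-configuration Status=Enabled
Then upload via the my-bucket.s3-accelerate.amazonaws.com endpoint.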




Amazon EBS


EBS volumes   : Data persists independently of the life of the instance ; set the Delete on Termination flag to "No" to keep the volume after the instance is terminated
                Each newly created encrypted volume gets a unique 256-bit AES key

EBS Magnetic volumes  : Old name for EBS Standard Volumes


SSD-backed volumes :  For transactional, IOPS-intensive database workloads ; General Purpose SSD (gp2) is the default volume type

Provisioned IOPS SSD (io1) ;  Available for all Amazon EC2 Instance Types
General Purpose SSD (gp2)  ;


HDD-backed volumes : For throughput-intensive and big-data workloads, large I/O sizes, and sequential I/O patterns

Throughput Optimized HDD (st1) and
Cold HDD (sc1) ;
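
For example (size, IOPS, and AZ are illustrative), volumes can be created per type from the CLI :
aws ec2 create-volume --volume-type gp2 --size 100 --availability-zone us-east-1a
aws ec2 create-volume --volume-type io1 --iops 3000 --size 100 --availability-zone us-east-1a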


Volume’s queue depth : Number of pending I/O requests from your application to your volume.
                       For every 500 provisioned IOPS, queue depth should be 1; so for 1500 IOPS it should be 3.
                       For every 1 MB of sequential I/O, average queue depth should be 4 or more

Snapshots :  Only available through Amazon EC2 ; can be public or private ; only captures data written to the Amazon EBS volume
             For a consistent snapshot, detach the volume cleanly, issue the snapshot, and then reattach
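
A sketch of that detach/snapshot/reattach flow from the CLI (volume, instance, and snapshot IDs are placeholders) :
aws ec2 detach-volume --volume-id vol-1d5cc8cc
aws ec2 create-snapshot --volume-id vol-1d5cc8cc --description "consistent backup"
aws ec2 attach-volume --volume-id vol-1d5cc8cc --instance-id i-dddddd70 --device /dev/sdh
aws ec2 wait snapshot-completed --snapshot-ids snap-4e665454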




Amazon EFS

Storage capacity is elastic, growing and shrinking automatically as you add and remove files
Multiple Amazon EC2 instances can access an Amazon EFS file system at the same time, as can on-premises servers.
Stored and accessed across multiple Availability Zones.
Throughput: baseline of 50 MB/s per TB of storage, with bursts up to 100 MB/s per TB

Can be mounted on on-premises servers using AWS Direct Connect, or
can be mounted on an Amazon EC2 instance using endpoints/mount targets in the VPC, so it can be accessed directly;
can be mounted outside the VPC using ClassicLink.


IAM can be used to control access to files and directories.
Each file system has a globally unique ID.

By default, you can create up to 10 file systems per AWS account per region
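
Mounting on an EC2 instance, as a sketch (the file system ID and region are placeholders, and nfs-utils must be installed) :
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs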






Amazon Glacier  : data archiving and long-term backup. $0.004 per gigabyte per month ; synchronously stores your data across multiple facilities
You can retrieve 10 GB of your Amazon Glacier data per month for free
Deleting data from Amazon Glacier is free if the archive being deleted has been stored for three months or longer

An archive can represent a single file, or you may choose to combine several files to be uploaded as a single archive. Each archive is assigned a unique archive ID that can later be used to retrieve the data.

Individual archives are limited to a maximum size of 40 terabytes.
Archives that are deleted within 3 months of being uploaded will be charged a deletion fee

Vault is a way to group archives together in Amazon Glacier.


Vault access policies are resource-based : you attach an access policy directly to a vault to govern access for all users.

You can set one vault access policy for each vault.


Vault Lock policy to deploy regulatory and compliance controls that are typically restrictive and are “set and forget” in nature.


You can limit retrievals to “Free Tier Only”, or if you wish to retrieve more than the free tier, you can specify a “Max Retrieval Rate” to limit your retrieval speed and establish a retrieval cost ceiling.


Data retrieval options : Standard, Bulk, or Expedited

  • Standard (default, 3 – 5 hrs)
  • Bulk retrievals (5 – 12 hrs)
  • Expedited     : For occasional urgent requests for a subset of archives (within 1 – 5 minutes); comes in two capacity types:
      On-Demand   : Available the vast majority of the time
      Provisioned : For specific guaranteed Expedited retrieval rate requirements ($100 per month) ; 3 requests every 5 min at 150 MB/s ; the default if purchased
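
A minimal CLI sketch of an Expedited retrieval (vault name, archive ID, and job ID are placeholders) :
aws glacier initiate-job --account-id - --vault-name my-vault --job-parameters '{"Type": "archive-retrieval", "ArchiveId": "<archive-id>", "Tier": "Expedited"}'
aws glacier describe-job --account-id - --vault-name my-vault --job-id <job-id>
aws glacier get-job-output --account-id - --vault-name my-vault --job-id <job-id> retrieved-archive.tar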



Snowball :

100TB per Snowball , available for 90 days. Data transferred in to AWS does not incur any data transfer fee. The S3 Adapter provides a read/write interface to the Snowball, as an alternative to the Snowball client.

Snowball Edge : For 40 to 100TB : On-board storage and compute capabilities.

Snowmobile  : For 100PB


Snowball client: The software tool that is used to transfer data from your on-premises storage to the Snowball

Job status can be tracked via Amazon Simple Notification Service (SNS), text messages, or directly in the Console.


  • Files are stored as objects in buckets on an AWS Snowball Edge appliance.
  • The keys are not permanently stored on the device and are erased after loss of power.
  • Applications and Lambda functions do not have access to storage.
  • Lastly, after your data has been transferred to AWS, your data is erased from the device
  • Each Lambda function can be associated with a single S3 bucket on the Snowball Edge
  • You cannot update your Lambda functions on premises in a Snowball Edge cluster

  • The S3-compatible endpoint changes if the primary node where it is running is disconnected, and its IP address will change
  • A Snowball Edge cluster cannot be preloaded with data from an existing S3 bucket
  • Read operations can be performed, but you will not be able to write to the Snowball Edge cluster until the primary node is restored
  • You can continue to use the Snowball Edge cluster even when you are replacing a secondary Snowball Edge device




http://blog.kiip.me/engineering/ec2-to-vpc-transition-worth-doing/



VPC :

All subnets of a VPC are associated with the main route table by default.
A CIDR block needs to be specified for each subnet, and it must be a subset of the VPC CIDR block.

There can be only one default VPC in a AWS region
aws ec2 create-default-vpc
aws ec2 describe-vpcs

  • For a VPC to access the Internet you can use public IP addresses, including Elastic IP addresses (EIPs). Instances without public IP addresses can access the Internet through a NAT gateway or a NAT instance, or through the virtual private gateway (VPN) to your existing datacenter.


You will need to enable NAT-T and open UDP port 4500 on your NAT device.
You can expand your existing VPC by adding four (4) secondary IPv4 CIDR ranges to your VPC.
You can create 200 subnets per VPC.
The minimum size of a subnet is a /28 (16 IP addresses) for IPv4.
For IPv6, the subnet size is fixed at /64.
The first four (4) IP addresses and the last one (1) IP address of every subnet are reserved.
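
A minimal CLI sketch (CIDRs and IDs are placeholders) of creating a VPC, adding a secondary CIDR, and creating a subnet :
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 associate-vpc-cidr-block --vpc-id vpc-0abc1234 --cidr-block 10.1.0.0/16
aws ec2 create-subnet --vpc-id vpc-0abc1234 --cidr-block 10.0.1.0/28 --availability-zone us-east-1a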



  • Nodes launched within a VPC aren’t addressable via the global internet, by EC2, or by any other VPC


  • DHCP option sets let you specify the domain name, DNS servers, NTP servers, etc. that new nodes will use when they’re launched within the VPC
  • You have to manage subnets, routing tables, internet gateways, NAT devices, network ACLs, etc.




Network Address Translation (private subnets)
Connect directly to the Internet (public subnets)


  • Instances in a private subnet can access the Internet without exposing their private IP address by routing their traffic through a Network Address Translation (NAT) gateway (private subnets) in a public subnet.
  • Connect to Amazon S3 without using an internet gateway or NAT, and control what resources, requests, or users are allowed through a VPC endpoint.

  • Virtual network in the AWS cloud - no VPNs, hardware, or physical datacenters required.

  • To interact with Amazon EC2 instances within a VPC : use hardware VPN  between existing network and Amazon VPC




Default VPCs : Created  first time you provision Amazon EC2 resources
VPC can span multiple Availability Zones but subnet must reside within a single Availability Zone.
When you launch an Amazon EC2 instance you must specify the subnet in which to launch the instance
you can specify the Availability Zone for the subnet as you create the subnet
Charges apply for network bandwidth when instances reside in subnets in different Availability Zones.
Initially, 20 Amazon EC2 instances can be launched at any one time, and the maximum VPC size is /16 (65,536 IPs).
An instance launched in a VPC using an Amazon EBS-backed AMI maintains the same IP address on reboot.
To enable a default VPC for an existing account, you must have no EC2-Classic resources and must terminate all non-VPC provisioned resources in that region,
or simply create a new account in a region that is enabled for default VPCs.
All instances launched in default subnets in the default VPC automatically receive public IP addresses.
To launch an instance into a nondefault VPC, specify a subnet ID during instance launch.

Elastic Network Interfaces :

  • Virtual network interface only for instances running in a VPC
  • Can be attached to an instance, detached and attached to another instance
  • One private IPv4 address
    One or more secondary private addresses.
    One Elastic IP address per private address.
    One public address
  • Network interfaces can only be attached to subnet/instances residing in the same Availability Zone and in the same VPC.
  • Primary interface (eth0) on an EC2 instance cannot be detached.
  • So an Elastic IP provides a kind of persistent public IP address that you can associate and disassociate at will


If the instance fails, you (or more likely, the code running on your behalf) can attach the network interface to a hot standby instance. Because the interface maintains its private IP addresses, Elastic IP addresses, and MAC address, network traffic begins flowing to the standby instance as soon as you attach the network interface to the replacement instance.

Security groups in a VPC let you specify both the inbound and outbound network traffic that is allowed to and from each Amazon EC2 instance.

EC2 instances within a VPC can communicate with ...
EC2 instances not within a VPC using Internet gateway or VPN
EC2 instances within a VPC  in another region using public IP addresses,NAT gateway, NAT instances, VPN connections, or Direct Connect connections
S3 using VPC Endpoint  , Internet gateway , Direct Connect or VPN connection.
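
For example (IDs and region are placeholders), a gateway VPC endpoint for S3 can be created with :
aws ec2 create-vpc-endpoint --vpc-id vpc-0abc1234 --service-name com.amazonaws.us-east-1.s3 --route-table-ids rtb-0abc1234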

You can create route rules to specify which subnets are routed to the Internet gateway, the virtual private gateway, or other instances.

Elastic IP addresses allow you to mask instance or availability zone failures
by programmatically remapping your public IP addresses to any instance associated with your account.

Peering Connections can be done with VPCs in the same region, even if they belong to another AWS account
Peered VPCs must have non-overlapping IP ranges
AWS Direct Connect or hardware VPN cannot be used to access peered  VPC
Data is not encrypted in VPC peering
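
A minimal CLI sketch (IDs and CIDR are placeholders) of setting up peering and routing to it :
aws ec2 create-vpc-peering-connection --vpc-id vpc-0aaa1111 --peer-vpc-id vpc-0bbb2222
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0ccc3333
aws ec2 create-route --route-table-id rtb-0abc1234 --destination-cidr-block 172.31.0.0/16 --vpc-peering-connection-id pcx-0ccc3333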




Application Load Balancer :

 Routes traffic to targets based on the content of the request. For HTTP and HTTPS.


Network Load Balancer   :


  1. Operates at layer 4. Supports only TCP (Layer 4) listeners.
  2. For where extreme performance is required.
  3. Preserves the source IP of the client.
  4. Provides a static IP per Availability Zone, and an Elastic IP can be assigned to the Load Balancer per Availability Zone.
  5. SSL termination is not available with Network Load Balancer.
  6. Supports DNS regional and zonal fail-over.
  7. Cannot have a mix of ELB-provided IPs and Elastic IPs or assigned private IPs.



1  subnet per Availability Zone per load balancer
50 listeners per load balancer
20 load balancers per region

For each subnet, an NLB can only support a single public/internet-facing IP address, so there can be no more than one EIP per subnet for the NLB.
For each subnet, an NLB can only support a single private IP.

Network Load Balancer support DNS regional and zonal fail-over with Route 53 health checking and DNS failover features
IP address for targets within load balancer’s VPC : any IP address
IP address for targets outside load balancer’s VPC : ranges (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16) or  (100.64.0.0/10)
Load balancing using IP addresses also allows multiple containers running on an instance to use the same port. Each container on an instance can now have its own security group.
To load balance to EC2-Classic instances, register the private IPs of these EC2-Classic instances as targets; registering their instance IDs as targets doesn't work.
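
A minimal CLI sketch of an NLB with IP targets (names, subnet/VPC IDs, ARNs, and target IPs are placeholders) :
aws elbv2 create-load-balancer --name my-nlb --type network --subnets subnet-0abc1234
aws elbv2 create-target-group --name my-tcp-targets --protocol TCP --port 80 --vpc-id vpc-0abc1234 --target-type ip
aws elbv2 register-targets --target-group-arn <target-group-arn> --targets Id=10.0.1.15 Id=10.0.1.16
aws elbv2 create-listener --load-balancer-arn <load-balancer-arn> --protocol TCP --port 80 --default-actions Type=forward,TargetGroupArn=<target-group-arn>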


Classic Load Balancer :

For applications that were built within the EC2-Classic network.





AWS CloudTrail  : records API calls (for example, Amazon EFS API calls and key usage) in log files
Amazon CloudWatch : monitor file system activity using metrics
AWS CloudFormation  : create and manage file systems using templates
AWS Direct Connect : establishes a private network connection between your on-premises datacenter and AWS

AWS Config

AWS Config creates configuration items for every supported resource in the region by invoking the Describe or List API calls, and
whenever there is any change in configuration.

Deliver Configuration Items either to S3 Bucket or SNS Topic

Sends a configuration history file every six hours.
Sends updated configuration details to an Amazon S3 bucket that you specify.
Delivers messages to subscribing endpoints (such as an email address or an Amazon SQS queue) via an SNS topic.
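
A minimal sketch of the delivery channel from the CLI (bucket name and topic ARN are placeholders) :
aws configservice put-delivery-channel --delivery-channel file://deliveryChannel.json

deliveryChannel.json :
{
  "name": "default",
  "s3BucketName": "my-config-bucket",
  "snsTopicARN": "arn:aws:sns:us-east-1:123456789012:config-topic",
  "configSnapshotDeliveryProperties": { "deliveryFrequency": "Six_Hours" }
}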



AWS Service Catalog

To create, manage, and distribute portfolios of approved products like servers, databases, websites, or applications to end users.
Fine-grain access controls of configuration and provisioning
Catalog administrators prepare AWS CloudFormation templates, configure constraints, and manage IAM roles that are assigned to products to provide for advanced resource management.


AWS Health

Provides ongoing visibility into the state of your AWS resources, services, and accounts.
AWS Health provides a console, called the Personal Health Dashboard.
AWS Health has a single global endpoint: https://health.us-east-1.amazonaws.com.
If you want to be notified or take an automatic action when events change, choose 'Set up notifications with CloudWatch Events'.
Shows open, closed, and other notifications.
The Personal Health Dashboard is available for all AWS accounts; the AWS Health API is available only to accounts that have a Business or Enterprise support plan.
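
If you have a Business or Enterprise support plan, events can also be pulled from the CLI (a sketch; the filter values are illustrative) :
aws health describe-events --region us-east-1 --filter '{"eventStatusCodes": ["open", "upcoming"]}'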








IAM  :


AWS IAM is not region specific and is free.
An access key has an ID and a secret key.


1. Activate MFA on the root account.
Manage MFA device (virtual/hardware)
Virtual MFA
   Virtual MFA application --- install Google Authenticator
Scan the QR code on screen from the smartphone using the virtual MFA application.
It creates two authentication codes on scanning in the smartphone.
Enter those codes in the console.
MFA device is now successfully associated.
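
The same association can be done for an IAM user from the CLI (a sketch; device name, user, and codes are placeholders) :
aws iam create-virtual-mfa-device --virtual-mfa-device-name GeorgeMFA --outfile qrcode.png --bootstrap-method QRCodePNG
aws iam enable-mfa-device --user-name George --serial-number arn:aws:iam::123456789012:mfa/GeorgeMFA --authentication-code1 123456 --authentication-code2 654321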



https://console.aws.amazon.com

2. Create IAM Users
Generate an access key for each user (used for the CLI).
It will create an Access Key ID and Secret Access Key.
Download the credentials in a file.
From the User Actions tab, you can create a password for the user to log in to the console.

3. Create Groups : SysAdmin
Attach policy to the Group like Administrator Access.
Add user to Group.
Groups --> Group Name --> Permissions --> Attach Policy

4. Create Role
Select Role Type (EC2 / Lambda / Redshift)
Attach Policy to Role  like  Administrator Access.


5. Apply an IAM password policy
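
For example (values are illustrative), a password policy can be applied account-wide from the CLI :
aws iam update-account-password-policy --minimum-password-length 12 --require-symbols --require-numbers --require-uppercase-characters --require-lowercase-characters --max-password-age 90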




EC2 Instances :

Launch EC2 Instance:
Choose AMI :  RedHat Linux / SuSE Linux / Amazon Linux / Windows etc.
Choose Instance type : t2.nano/micro/small/medium ... memory optimized or storage optimized
Configure Instance :  number of instances, network, subnet, public IP, IAM role and others
Add storage
Tag Instance
Configure Security Group
Launch
Generate and download key pair

Connect to Ec2 instance server :
ssh ec2-user@<IPofEC2instance> -i <locn_on_pc_for_keypair>




Uploading Custom SSH Key Pair using Ansible :

aws ec2 describe-key-pairs

ssh-keygen -t rsa
   It will create id_rsa and id_rsa.pub


vim upload-sshkeypair.yml
   --and update key_material  and other attributes

ansible-playbook upload-sshkeypair.yml
   -- This will require boto. If not installed, install boto:

pip list  |grep boto

pip install  boto

ansible-playbook upload-sshkeypair.yml
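
The same key upload can also be done without Ansible (a sketch; the key name is a placeholder; AWS CLI v2 needs fileb:// instead of file://) :
aws ec2 import-key-pair --key-name lab-key --public-key-material file://~/.ssh/id_rsa.pub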



Assign static Hostname  to Instance

1. Preserve hostname
ssh ec2-user@<publicIp>
sudo su -
Add preserve_hostname: true  to  /etc/cloud/cloud.cfg


2.
/etc/hostname  : Add labserver
/etc/hosts     : No need to add the actual hostname ; entries are 127.0.0.1
but add the private IP in the hosts file:
ifconfig eth0
Add 172.31.39.81 labserver to /etc/hosts

And reboot the server with root# init 6

3. Check :
sudo /bin/bash
hostname
uname -a


Assign Elastic IP :

The public IP gets released on instance stop/start.
Elastic IP is static,region specific and belongs to AWS account and can be attached to any one instance(running/stopped) in that region.

Get public IP :
aws ec2 describe-instances --instance-ids i-098dfadddb7 --query 'Reservations[*].Instances[*].[PublicIpAddress]'  --output text

Get VPC specific elastics IP :
aws ec2 allocate-address --domain vpc

Associate to the Instance :
aws ec2 associate-address --instance-id i-098dfadddb7 --allocation-id  <allocation-id from above command>

Now check public IP :
aws ec2 describe-instances --instance-ids i-098dfadddb7 --query 'Reservations[*].Instances[*].[PublicIpAddress]'  --output text


Check details of allocation :
aws ec2 describe-addresses --allocation-ids  <allocation-id from above command>

Disassociate the elastic IP :
aws ec2 disassociate-address  --association-id  <association-id from the associate-address output>



AMI (Image)


Ec2 Instance server :

MYPC# ssh ec2-user@<publicIpOfec2> -i <keypair>

ec2-user@<publicIpOfec2># sudo /bin/bash

root# shred -u /etc/ssh/*_key /etc/ssh/*_key.pub
root# find / -name "authorized_keys*" -exec rm -f {} \;
root# rm -rf ~/.ssh

root# shred -u ~/.*history

root# find /root/.*history /home/*/.*history -exec rm -f {} \;

root# history -w
root# history -c



Cli mgmt server :

aws ec2 describe-instances |grep -i i-

aws ec2 create-image --instance-id i-0gus7gd78h --name "RHEL7.20171030" --description "Image creation"




Creating VPC :


Instance requires a VPC    : awslab-vpc
    The VPC needs a CIDR block 172.16.0.0/16
     By default, a routing table and a network ACL are created on creating a VPC
     If using Direct Connect, change DNS resolution to 'No'


Instance require subnet :

    public or private (awslab-subnet-public/private) : public for Appserver and private for DB

    Both connected to vpc(awslab-vpc) which gives it vpc CIDR block 172.16.0.0/16

    subnet needs to be in an Availability Zone

    subnet needs IPV4 CIDR block: 172.16.1.0/24 for public

    subnet needs IPV4 CIDR block: 172.16.2.0/24 for private

    Gets attached to the default routing table

    Modify auto-assign public IP to Yes for the public App server subnet.


    Internet Gateway(awslab-igw-public) needs to be created and attached to the VPC


    routing table(awslab-rt-internet) needs to be created and attached it to the VPC

    Associate the routing table with Internet Gateway : --> Routes --> Edit --> Add another route  destination 0.0.0.0/0 and Target : awslab-igw-public

    Associate the routing table explicitly with subnet: --> subnet Association --> Edit --> select public subnet name --> save
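
The same gateway and routing steps as a CLI sketch (all IDs are placeholders) :
    aws ec2 create-internet-gateway
    aws ec2 attach-internet-gateway --internet-gateway-id igw-0abc1234 --vpc-id vpc-0abc1234
    aws ec2 create-route-table --vpc-id vpc-0abc1234
    aws ec2 create-route --route-table-id rtb-0abc1234 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0abc1234
    aws ec2 associate-route-table --route-table-id rtb-0abc1234 --subnet-id subnet-0abc1234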


Ports used below : SSH on 22 and custom TCP on 3110


Optional : Auto Assign Public-Ip

Instance require security group : with public(appserver) or private(DB) connectivity

awslab-sg-database
EC2 -->  Create security group --> assign VPC
Assign rules : inbound  SSH on port 22 from 172.16.1.0/24
                       inbound  custom TCP on port 3110 from 172.16.1.0/24
                       inbound  All ICMP IPv4 on port 0-65535 from 172.16.1.0/24

awslab-sg-public
EC2 -->  Create security group --> assign VPC
Assign rules : inbound  SSH on port 22 from My IP
                       inbound  All ICMP IPv4 on port 0-65535 from 0.0.0.0/0
                       inbound  HTTP on port 80 from 0.0.0.0/0
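
The same security group can be created from the CLI (a sketch; group/VPC IDs and your IP are placeholders) :
aws ec2 create-security-group --group-name awslab-sg-public --description "Public app server SG" --vpc-id vpc-0abc1234
aws ec2 authorize-security-group-ingress --group-id sg-0abc1234 --protocol tcp --port 22 --cidr <my-ip>/32
aws ec2 authorize-security-group-ingress --group-id sg-0abc1234 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0abc1234 --protocol icmp --port -1 --cidr 0.0.0.0/0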


Instance require key pair : Use existing if available


connect to the App server with the Public Ip






CLI Commands :

Users:
aws iam list-users

S3 :
aws s3 sync s3://oldbucket s3://newbucket --source-region us-west-1 --region us-west-2
aws s3 rb s3://bucket-name --force
aws s3 cp MyFolder s3://bucket-name --recursive [--region us-west-2]

Subnets :
aws ec2 describe-subnets
aws ec2 describe-subnets | jp "Subnets[?AvailabilityZone == 'us-west-2b'].SubnetId"

ec2 Instances :
aws ec2 describe-instances
aws ec2 describe-images --owners 099720109477
aws ec2 describe-instances --filters Name=tag:Name,Values=dev-server
aws ec2 start-instances --instance-ids  i-dddddd70
aws ec2 stop-instances --instance-ids  i-5c8282ed
aws ec2 stop-instances --dry-run --instance-ids i-dddddd70

aws ec2 stop-instances --dry-run --force --generate-cli-skeleton  --instance-ids i-dddddd70
aws ec2 stop-instances --cli-input-json file://stop.json

aws ec2 terminate-instances --dry-run --instance-ids i-dddddd70
aws ec2 terminate-instances --instance-ids i-44a44ac3

aws ec2 describe-instances | jq '.Reservations[].Instances[] | {instance: .InstanceId, publicip: .PublicIpAddress}'
aws ec2 describe-instances --filters Name=instance-state-name,Values=stopped --region eu-west-1 --output json | jq -r .Reservations[].Instances[].StateReason.Message

Launch Instance :
aws ec2 run-instances --image-id ami-22111148 --count 1 --instance-type t1.micro --key-name stage-key --security-groups my-aws-security-group
aws ec2 run-instances --dry-run --image-id ami-08111162 --count 1 --instance-type t1.micro --key-name MyKeyPair

aws ec2 run-instances --dry-run --image-id ami-08111162 --count 1 --instance-type t1.micro --key-name MyKeyPair --security-groups my-ami-security-group

Disable accidental deletion of running instance :
aws ec2 modify-instance-attribute --instance-id i-44a44ac3 --disable-api-termination
aws ec2 modify-instance-attribute --instance-id i-44a44ac3 --no-disable-api-termination

aws ec2 modify-instance-attribute --instance-id i-44a44ac3 --instance-type "{\"Value\": \"m1.small\"}"

Monitor Instance CloudWatch :
aws ec2 monitor-instances --instance-ids i-44a44ac3
aws ec2 unmonitor-instances --instance-ids i-44a44ac3

aws ec2 reboot-instances --instance-ids i-dddddd70 

Display system Log whatever was sent to the system console
aws ec2 get-console-output --instance-id i-44a44ac3

Change Instance type :
aws ec2 describe-instances
aws ec2 stop-instances --instance-ids i-44a44ac3
aws ec2 modify-instance-attribute --instance-id i-44a44ac3 --instance-type "{\"Value\": \"m1.small\"}"

aws ec2 describe-instances

Container Instances :
To add container instances (instances running ECS Agent) to a cluster:
aws ec2 run-instances --image-id $IMAGE_ID --count 3 --instance-type t2.medium --key-name $KEY_NAME --subnet-id $SUBNET_ID --security-group-ids $SEC_GROUP_ID --user-data file://user-data --iam-instance-profile ecsInstanceRole

To de-register container instances :
aws ecs deregister-container-instance --cluster <value> --container-instance $INSTANCE_IDS --force

Security Groups :
aws ec2 describe-security-groups
aws ec2 describe-security-groups --region us-west-1
aws ec2 revoke-security-group-ingress --group-id $SEC_GROUP_ID --protocol <tcp|udp|icmp> --port <value> --cidr <value>
aws ec2 authorize-security-group-ingress --group-id $SEC_GROUP_ID --protocol <tcp|udp|icmp> --port <value> --cidr <value>

Cluster :
aws ecs create-cluster [default|--cluster-name <value>]
aws ecs delete-cluster --cluster <value>

Service :
aws ecs create-service --cluster <value> --service-name <value> --task-definition <family:task:revision> --desired-count 2
aws ecs update-service --cluster <value> --service <value> --desired-count 0
aws ecs delete-service --cluster <value> --service <value>

Volumes :
aws ec2 describe-volumes 
aws ec2 attach-volume  --volume-id vol-1d5cc8cc --instance-id i-dddddd70 --device /dev/sdh
 State will change from available to in-use

Image :
aws ec2 describe-images
aws ec2 create-image --instance-id i-44a44ac3 --name "Dev AMI" --description "AMI for development server"


Delete Image:
When you create an image, it also creates a snapshot.

aws ec2 describe-images --image-ids ami-2d574747
aws ec2 deregister-image --image-id ami-2d574747
aws ec2 delete-snapshot --snapshot-id snap-4e665454

Key Pairs :
aws ec2 describe-key-pairs
aws ec2 create-key-pair --key-name dev-servers
aws ec2 delete-key-pair --key-name dev-servers

Tags :
aws ec2 create-tags --resources i-dddddd70 --tags Key=Department,Value=Finance

aws ec2 delete-snapshot --snapshot-id snap-4e665454



Common commands :

aws iam list-users

aws iam list-groups

aws iam list-roles

aws iam list-policies --scope AWS |more




aws iam create-user --user-name George

aws iam create-access-key --user-name George

aws iam create-group --group-name Developers

aws iam add-user-to-group --user-name George --group-name Developers

aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name Developers


aws iam list-attached-group-policies --group-name Developers



cat ec2-role-trust-policy.json
::::
::::

aws iam create-role --role-name TestRole --assume-role-policy-document file://ec2-role-trust-policy.json

aws iam delete-role --role-name TestRole
aws iam delete-user --user-name DummyGeorge
aws iam delete-group --group-name DummyGroup





















Videos :

http://meta-guide.com/videography/100-best-amazon-aws-tutorial-videos

https://aws.amazon.com/training/intro_series/

https://www.aws.training/

https://aws.amazon.com/rds/?tag=bom_tomsitpro-20

https://aws.amazon.com/ec2/?tag=bom_tomsitpro-20

https://aws.amazon.com/s3/?tag=bom_tomsitpro-20

Computer Guy: https://www.youtube.com/playlist?list=PLhr1KZpdzukcMmx04RbtWuQ0yYOp1vQi4

Amazon Web Services : https://www.youtube.com/watch?v=N89AffsxS-g

Edureka         : https://www.youtube.com/watch?v=2KcZgdsuMto

SimpliLearn  :  https://www.youtube.com/user/Simplilearn/search?query=AWS


Udemy : https://www.udemy.com/cloud-computing-with-amazon-web-services-part-1/




Free Labs :


https://aws-tutorials.blogspot.in/


acloud.guru
Linux Academy (free 2 months)
AWS practice exams
FAQs
Security : Whitepaper

Visual Studio Dev Essentials : log in with outlook.com --> Linux Academy, get code --> link --> Activate Now

 AWS labs (via qwikLABS) : https://aws.amazon.com/training/intro-to-aws-labs-sm/

https://qwiklabs.com/

https://www.unitrends.com/products/boomerang-for-vmware

http://cloudacademy.com/blog/10-courses-to-learn-amazon-web-services-with-our-7-day-free-trial/

https://www.lynda.com/AWS-training-tutorials/1536-0.html?utm_medium=affiliate&utm_source=CJ&utm_content=11926&utm_campaign=CD17318&bid=11926&AID=CD17318&AIDCJ=12308797&PIDCJ=4003003&SIDCJ=j31ax0y82c012c6703eux&subid1=4003003

https://www.cbtnuggets.com/it-training/amazon-aws-essentials-architect-sysops



Practice  :


https://www.youtube.com/watch?v=VgNepgYx0AA&list=PLxzKY3wu0_FKbSAQ4KxJKBe040rheXgLW&index=4

Server --- require role --- to access the aws account
Instance --- Actions --- Attach IAM role


User --- User ARN -- Permission --- Add existing policies directly -- AmazonS3ReadOnlyAccess
                  -- Access type --- group (policies) --- user URL

Group -- Permissions -- Managed/inline policies --Attach policies-- AmazonEC2ReadOnlyAccess//Billing//AmazonEC2ReportAccess
                     --inline policies  -- policy generator -- Allow/Deny , AWS Service , Action , ARN



Role -- role type( service //service linked// cross-account access // Identity provider) : Amazon EC2 --policy-- AmazonS3FullAccess


Dashboard -- Mumbai region -- AMI (Amazon Linux) -- Instance type (t2.micro) -- instance details (IP / IAM role) -- Add storage -- name -- Security group (firewall) -- Launch -- Key pair


putty -- Host: IP , username  , use Private key

$aws s3 ls

[[Instance --- Actions --- Attach IAM role]]

Group -- Permissions -- Managed/inline policies --Attach policies-- AmazonEC2ReadOnlyAccess//Billing//AmazonEC2ReportAccess + amazonS3ReadOnlyAccess

$aws s3 mb  s3://bktname
$aws s3 cp /tmp/index.html s3://bktname

You check the https://s3.console.aws.amazon.com

or

$aws configure
Access key:
secret key:
region:



Blogs :

http://www.thegeekstuff.com/2017/07/aws-ec2-cli-userdata/#more-17644

https://aws.amazon.com/blogs/compute/

https://blog.cloudthat.com/category/aws/

http://www.rightbrainnetworks.com/blog/category/amazon-web-services/