AWS Solution Architect Associate Practice Questions

Hi, do you want to bump up your IT career? Well, the best way to do that is to get certified on the Amazon Web Services platform. The AWS certification is one of the most sought-after certifications in the world, and it can bump up your salary, because major companies are looking for individuals who are certified in the AWS domain. I am certified as a Solution Architect Associate, a Developer Associate, and a SysOps Administrator, and I bring you my courses on Udemy. If you go to the link you will see my various courses, each for just 10 USD. All of my courses have a variety of lectures; my Solution Architect course has a hundred-plus lectures and gives you in-depth knowledge of each of the AWS services. It also has 150 practice questions that make you more confident in taking the exam. So don't wait any longer: look at the description, click the link, and you get lifetime membership of my course on Udemy for just 10 USD.

Hi, and welcome back. This is the Quiz 1 question bank, so let's go through the questions and I'll give an explanation for each answer.

Question number 1 is regarding the NAT instance. You have a NAT instance launched in the public subnet, and the NAT instance has been configured correctly when it comes to the network access control list and the security group. The instances in the private subnet can access the NAT instance, but they still cannot access the internet. If you look at the answers, the choice is basically either to disable or to enable the source/destination check. First of all, the source/destination check has to be changed on the NAT instance, and the correct setting is to disable it. This is because when a request is made from the private instance via the NAT instance to the internet, that request carries the source IP of the NAT instance, not the source IP of the private instance, when it goes out to the internet. Normally, when you have an EC2 instance in a VPC and it makes a request to the internet, the source and destination are checked by the instance, so that when the response comes back, it actually comes back to that EC2 instance. But in the case of a NAT instance, remember that the request will carry the source IP of the NAT instance rather than that of the private instance; the NAT instance then forwards the request to the internet and sends the response back to the instance in the private subnet. In such a case the NAT instance should not be checking the source and destination IP, because the source IP will be the NAT instance itself. So you have to make sure that you disable the source/destination check on the NAT instance.

Here I am in the AWS console; let's quickly go to EC2 and launch a NAT instance. You first have to go to the community AMIs and search for one. I'm assuming you've already tried this out; if you haven't, you can visit my AWS Solution Architect course, where I have a video on how to provision a NAT instance. The AMI needs to have the keyword NAT. Select the NAT AMI, choose the t2.micro instance type, and configure the instance details; I'll put this in my default VPC and click Next to add the storage. I'll leave the storage as it is. If you want, you can add a tag: this is going to be our NAT instance. The security group is important. If you want to ensure that you can receive requests on port 80, you can choose a security group like the one I have here, but again it depends on what you want to allow; if you want to allow SSH on port 22, make sure that's part of the security group. Then finally launch the instance: I have a key pair, so I'll accept the key pair, click on Launch Instances, and quickly go to View Instances. Now the NAT instance is being provisioned. Once it is provisioned, if you go to Actions and then Networking, you will see Change Source/Dest. Check. Currently it's enabled, and if you click on "Yes, Disable", the source/destination check will be disabled. Remember, this is important if you want requests from instances in the private subnet to be routed by the NAT instance to the internet.
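If you prefer to script this instead of clicking through the console, here is a minimal sketch using Python's boto3 (my choice of SDK for the examples in this post); the region and instance ID are hypothetical placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Disable the source/destination check so the NAT instance can forward
# traffic whose source IP is not its own (i.e. the private instances').
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",  # hypothetical NAT instance ID
    SourceDestCheck={"Value": False},
)

This is the same toggle as Actions, Networking, Change Source/Dest. Check in the console.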
Let's go on to the next question: which of the following statements are true about Route 53 resource records? The first option says a CNAME record can be created for your zone apex. This is invalid, so this option is wrong. Let's go to Route 53. I have one hosted zone, and in that hosted zone my zone apex is a DNS name, commitmenthub.com. Now let's create a record set. One of the options is whether you can create a CNAME record for the zone apex, which is commitmenthub.com itself. If I set the type to CNAME, give it a value, and click on Create, it says that CNAMEs are not permitted at the apex in a zone. So that option is invalid: you cannot create a CNAME record for the zone apex.

Let's go back. The right answer is that a Route 53 CNAME can point to any DNS record hosted anywhere. So you can create a CNAME record that points to any other record, whether it's on-premise or on the internet. The last option says a time to live can be set for an alias record. An alias record can map to an AWS resource such as an Elastic Load Balancer or an S3 website bucket hosted in AWS. If you go to Route 53 and click Yes for Alias, you can target S3 websites, Elastic Load Balancers, CloudFront distributions, or Elastic Beanstalk environments, but you cannot set a time to live for that record. For example, if I click on No, you can see that I have the option to specify the time to live for this record, whereas as soon as you change it to an alias record, you can't set a time to live. These are important points to remember when it comes to Route 53.

Let's go on to our next question: which of the following resources are available at the global level? This is pretty important. When you are planning for disaster recovery in your AWS infrastructure, you will normally have your resources either in another availability zone or replicated in another region. So you need to know which services are available at the global level and which are region-level services that need to be replicated to another region. Here the answer is clearly Identity and Access Management. Let's go to the AWS console: note the region selector. If I go to EC2, you can see that you have specific regions, which means EC2 is a region-specific service. Now let's go to Identity and Access Management: you will see that it's at the global level, so there are no region-specific details. When you create an IAM user, an IAM group, or an IAM role, it's available at the global level. Let's look at another service that is available at the global level: if you go to Route 53, you can see this is also global, because it's the Domain Name System for the entire set of resources in your AWS infrastructure.
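To tie the two Route 53 points together, here is a hedged boto3 sketch that creates an alias A record at the zone apex pointing at a load balancer. The hosted zone ID, domain, ELB DNS name, and the ELB's own hosted zone ID are all hypothetical placeholders; notice that the record set has no TTL field, because alias records don't accept one:

import boto3

route53 = boto3.client("route53")

# Upsert an alias A record at the zone apex pointing at an ELB.
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # your hosted zone (placeholder)
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z35EXAMPLE",  # the ELB's zone ID (placeholder)
                    "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com.",
                    "EvaluateTargetHealth": False,
                },
                # No TTL key here: alias records do not take one.
            },
        }]
    },
)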
On to the next question. Question number 4: you have a web application which needs to store user session state; which of the following services provides a shared data store with durability and low latency? The keywords are durability and low latency. For a shared data store, especially for session state, you could use DynamoDB, a database, or ElastiCache, but when it comes to durability and low latency, DynamoDB wins over ElastiCache. If you go to the DynamoDB documentation, even the home page clearly mentions that DynamoDB is a service with high durability and low latency. So between the options of ElastiCache and DynamoDB, I would choose DynamoDB to store the user session state if I require durability and low latency.

Let's go on to the next question. This is a pretty easy one, because the requirement is to have a dedicated line from your on-premise infrastructure to your AWS VPC. If you want to connect your on-premise environment to your VPC, you can use either a virtual private gateway or Direct Connect. The virtual private gateway routes requests through the internet, so you would have some latency; if you want a direct connection from your on-premise environment to your VPC, you need to use AWS Direct Connect. So if you want to transmit a lot of data from on-premise to your AWS architecture, Direct Connect should be used. For example, say you have a database at your on-premise location, which is your staging area, and you want to replicate the data to a test environment you are provisioning in AWS; you want to transfer the data from, say, an Oracle database on-premise to your AWS VPC. If you use a virtual private gateway, which goes over the internet, you would find it takes quite a long time; you would probably get connection timeouts and dropped packets. There is a whole lot of problems you might have when you choose the virtual private gateway in such a case. If you really need throughput efficiency when transferring data from your on-premise environment to your VPC, use Direct Connect.

Let's go on to the next question. Again, this is pretty simple: as I mentioned before, if you want to store session state, you can use ElastiCache, DynamoDB, or the Relational Database Service.
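As a rough illustration of the session-state answer above, here is a small boto3 sketch that stores and reads a session item in DynamoDB; the table name, key schema, and attribute names are hypothetical:

import time
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("user-sessions")  # hypothetical table, partition key "session_id"

# Write a session record; a TTL attribute lets DynamoDB expire it later.
table.put_item(Item={
    "session_id": "abc123",
    "user_id": "user-42",
    "cart": ["item-1", "item-2"],
    "expires_at": int(time.time()) + 3600,  # expire in one hour
})

# Read it back: durable, shared across web servers, low latency.
response = table.get_item(Key={"session_id": "abc123"})
print(response.get("Item"))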
Now we come to: which of the below processes is the responsibility of AWS? This comes down to the shared responsibility model, so I hope you have all gone through it: what's the responsibility of the customer and what's the responsibility of AWS? For example, your data, your encryption, taking care of the firewall configuration, networking, and so on all sit with the customer. AWS provides the infrastructure: the compute facilities, the storage facilities, the ability to encrypt your EBS volumes, etc. It's up to you as the customer to make sure you use those services to safeguard your infrastructure and your data. So going back to the question: which of the following processes is the responsibility of AWS? Encryption of data stored on any device: that's with the customer. AWS provides encryption (for S3 you can enable encryption if you want to), but it's up to the customer to enable it. Scanning for viruses on EBS volumes and snapshots: the customer can have antivirus software installed on the EC2 instances to scan the volumes and snapshots. Replication of data across multiple AWS regions: a lot of services provided by AWS, such as the Simple Storage Service or DynamoDB, replicate data across multiple availability zones or multiple data centers to ensure that the service itself is durable, but replication of anything across regions has to be done by the customer. So if you have a disaster recovery scenario where you keep infrastructure in another region in case your primary infrastructure fails, and you have a database in both regions, you have to ensure the backup database stays in sync with the primary, for example via replication scripts. Finally, the decommissioning of storage devices: that is the storage layer, which is taken care of by AWS, since that's their infrastructure.

Next: which of the following will occur when an EC2 instance with a public IP is stopped and started? There are a lot of things which can happen. Most notably, the EC2 instance can lose its public IP; it is subject to change, because when an EC2 instance is stopped and started, it will start either on the same physical host or on another physical host. But this question pertains more to storage. Remember the instance store: this is where the data is located on the same physical host where the EC2 instance is launched, so when the instance is stopped and started, that data will be lost. When it comes to Elastic Block Store, however, this is storage located on a separate SAN-style storage device, so the data on this device will not be lost (and you can disable the Delete on Termination setting for the volume).

The next question is pretty straightforward: if you want to issue API calls from an EC2 instance, you should assign an Identity and Access Management role. For example, if you have code running on an EC2 instance that needs to access a resource such as S3 or DynamoDB, use an IAM role.

Next: how does CloudFront deliver content? Pretty straightforward: edge locations. Not availability zones, not regions. AWS maintains availability zones, regions, and edge locations, but the one used to distribute data to users is edge locations.

This next one is about extending your on-premise infrastructure onto the AWS cloud. You are using the virtual private gateway on the AWS side, and the connection is known as a VPN connection. On the customer side you need something known as a customer gateway; you connect the customer gateway and the virtual private gateway to establish the VPN connection. In order to establish a connection over the internet, the customer gateway obviously needs an IP address, and that IP address has to be routable through the internet, because the VPN connection is established over the internet. So if you want your packets and your data to be transmitted from your on-premise infrastructure to your AWS VPC over the internet, they need to go through this customer gateway, and the customer gateway needs an IP address that is static and routable through the internet.
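Going back to the IAM-role answer for a moment: the nice property of a role is that your code carries no credentials at all. A minimal sketch, assuming the script runs on an EC2 instance that has a role with S3 read permissions attached:

import boto3

# No access keys in the code: when this runs on an EC2 instance with an
# IAM role attached, boto3 picks up temporary credentials from the
# instance metadata service automatically.
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])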
Next is a question regarding auto scaling of EC2 instances. You are seeing that instances are being spun up very fast within an hour. This is a scale-up scenario: you're doing an on-demand scale-up of your instances in the Auto Scaling group, and you're noticing that a lot of instances are being spun up in that hour, so you need to ensure instances are added in the right proportion. For this, the answer is to set the right cooldown period. If we go to the AWS dashboard and on to EC2, you can see Auto Scaling; let's quickly go to Auto Scaling Groups and create an Auto Scaling group. I'll choose one of my existing launch configurations and click Next Step, give a name for the Auto Scaling group, and choose subnets. It's always good to choose multiple subnets in an Auto Scaling group so that the EC2 instances are spread evenly across the availability zones. Next, when you use a scaling policy, there is a setting that says the instances need this many seconds (the default is 300) to warm up after scaling. This is very important: you need to understand how long it takes, when an instance is added to the Auto Scaling group, for the instance to boot, for any scripts to run, and for the application to stabilize itself using this new instance. All of this happens during the cooldown period. If you specify too little time, say 300 seconds (5 minutes) is not enough, then during those 5 minutes the instance has not finished running its scripts and is still trying to spin up, and in that time the metrics are being fired again. Maybe you have a metric with a set threshold; that metric fires the alarm again, and the group scales up another instance. So before even giving the previous scaled-up instance a chance to finish its scripts and add itself to the application, the group is already spinning up a new instance. You need to ensure you have a good cooldown period before the next set of scaling happens in the scaling group.

Let's go on to the next question. This one is about Elastic MapReduce, which I have discussed in really great detail in my AWS Solution Architect course; I have a separate chapter explaining the Elastic MapReduce process. Here you have a large file. In the Elastic MapReduce process you have multiple mapper processes which process files, so if you have a large 1 TB file and only one mapper working on it, it will obviously take a long time. The best way is to make the file sizes small: split the file into multiple files so that you can have multiple mapper processes running alongside each other, and the job finishes faster. That way you're properly utilizing the mapper processes, your nodes, and so on. So the answer is to change the file size, splitting the large file so that more mapper tasks can run simultaneously.
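Returning to the cooldown discussion, here is a sketch of creating an Auto Scaling group with a longer cooldown via boto3. I'm mapping the console setting onto the DefaultCooldown parameter, which is my interpretation of the wizard field described above, and all the names and subnet IDs are hypothetical:

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# A longer cooldown gives a newly launched instance time to boot, run
# its scripts, and stabilise before the next scaling activity fires.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",                     # placeholder
    LaunchConfigurationName="web-launch-config",        # placeholder
    MinSize=2,
    MaxSize=10,
    DefaultCooldown=600,  # seconds; the console default is 300
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # spread across AZs
)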
Let's go on to the next question, which is about CloudFront. When you have a CloudFront distribution with S3 as the origin, by default users will have public access to S3. Normally users will only access the CloudFront URL, but they could access the S3 URL as well. If you want to protect your S3 bucket and make sure that users only go through the CloudFront URL, you can use the CloudFront origin access identity. This is used as an extra safeguard when you're configuring CloudFront with S3 as the source.

Now, what is the limit of Elastic IPs? Sometimes I do get asked whether the exam will really ask what the limit of Elastic IPs per region is (the default is five); don't be surprised if you do get this question. Why is it important from a solution architect's perspective? Well, if you're working in an organization, you need to know the limits that exist for services in a cloud provider. Let's say you have scripts which automate the creation of instances, and in the scripts you also automate attaching an Elastic IP to each instance, and you just blindly keep on adding more instances without knowing the limit on Elastic IPs. What happens is that, say, the sixth instance that gets spun up will not get an Elastic IP, and then you will be knocking your head trying to figure out what's going wrong. So you need a good understanding of the limits. If you go to the AWS console and on to EC2, there is a separate section called Limits, and if you scroll down you can see the limit on the number of Elastic IPs. The limits are there in your AWS console to let you, as an admin or architect, understand the limits of the resources in AWS. You can always request a limit increase, but if you have automation (and a lot of organizations are moving to automation), you need to account for these limits and make sure you request the limit increase early on, before the automation kicks in.

In IAM, what are the access keys which are granted to a user? This is pretty simple: it's the access key ID and the secret access key. If you go to the AWS console, go to My Security Credentials or open any user (I'll go to a demo user) and click Security Credentials, you'll see two things. One is managing the password: you can give a password to the user, and the password is used to log in to the console. The other is the access keys: if you click on Create Access Key, you will get the access key ID and the secret access key. This is for when the user wants to access AWS resources via API calls, via the SDK, or via the command-line interface.

Now, when it comes to the support plans, these are the support plans made available by AWS. Again, this is important: an architect's job is not just to know which services to use where, but also to know what is offered when it comes to support. When you hand your infrastructure over to your IT support team, they will want to know what level of support you've actually purchased from AWS.

The next one is pretty simple: you want to access resources via the Java SDK. This could be a developer, so what do you give them? You give them access keys; the access keys are then used to access resources from the SDK. I have a separate course on AWS development which shows how to use access keys with an SDK; there I use a program to access resources on AWS and show how to use access keys in detail.
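For the SDK question above, here is what using access keys looks like in code. The question mentions the Java SDK, but to keep one language across these examples I'll sketch it with boto3; the key values are placeholders, and in real projects you'd prefer IAM roles or shared credential files over hard-coding keys:

import boto3

# Explicit access keys, e.g. for a developer machine or a CI job.
# Placeholder values only: never commit real keys to source control.
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",       # access key ID (placeholder)
    aws_secret_access_key="wJal...",   # secret access key (placeholder)
)
print([b["Name"] for b in s3.list_buckets()["Buckets"]])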
Next: you have a user who wants to use the AWS console. Pretty simple: a password to be used along with the user ID.

Which of the following features of the VPC is stateless in nature? This is pretty important. You have security groups and you have network access control lists. Security groups are stateful in nature: when you make a request, if the request is allowed, the response to that request will automatically be allowed. It's not the same with network access control lists: if a request is allowed, there is no guarantee that the response to that request will be allowed; it depends on the inbound and outbound rules defined for the network ACL. So make sure you understand the stateless nature of the network access control list.

Welcome back to the next series of questions. In this question we've been asked which of the following services provides underlying access to the EC2 instance: the Elastic Load Balancer, RDS, ElastiCache, or Elastic Beanstalk. With the Elastic Load Balancer, you don't have any device which you can actually connect to; control of the Elastic Load Balancer is with AWS. We've seen that when you spin up the Relational Database Service, you can only specify the underlying instance capacity to host the relational database, but you don't have the ability to connect to the instance that hosts it. The same goes for ElastiCache: it's just a service, you again just specify the underlying instance capacity, but you can't connect to the underlying instance. So the only option here which allows you to connect to the EC2 instances is Elastic Beanstalk. Here I have an Elastic Beanstalk service running: if I click on the environment and go to Configuration, you can see the instances, and if you click on Settings you can choose a key pair. You can then use that key pair to log on to the underlying EC2 instance.

On to the next question. You have an application hosted in AWS, and you have now published an update to the application which listens on a custom port. In order to ensure clients can connect to the application update on this new port, what should you do? You should assume that the firewall changes have already been made on the instances themselves. The answer here is to make changes to your security group. Remember, in the default network ACL all inbound traffic is allowed unless you explicitly deny some access, but in a security group all access is denied by default unless you allow specific traffic into your instance. Here I am in the EC2 console: if I go to a server and on to its security group, this instance has a security group with inbound rules allowing HTTP connections, SSH connections, and so on. If you want a custom port defined, say you have an application listening on port number 3000 on this server, you create the rule; if you want the connection to come from anywhere, you can specify Anywhere, and click on Save. Once the rule is defined, remember that the effect is immediate: as soon as you declare this rule, you should be able to access port number 3000 on the server.

Next: which of the following statements is false? You need to go to the AWS documentation for the Import/Export and Snowball service. You can import data into S3, into Glacier, and into EBS using the Snowball service; you can also export data from S3, but you cannot export data from Glacier.
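The custom-port answer above can also be scripted. A minimal boto3 sketch, with a hypothetical security group ID, that opens TCP 3000 from anywhere; as noted, the rule takes effect immediately:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow inbound TCP 3000 from anywhere; effective as soon as it's added.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3000,
        "ToPort": 3000,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "app on custom port"}],
    }],
)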
Next: which of the following is not a valid type of storage gateway? This should be pretty simple. You have gateway cached volumes, stored volumes, and the virtual tape library; you also have the file gateway, which is another type of gateway offered by AWS. But there is nothing known as gateway accessed volumes.

Now, here you have an image processing application and you want to keep the cost of storing the images to a minimum. You could use the standard storage class in S3, but if you want to minimize the cost you can use Reduced Redundancy Storage, a feature available from AWS for S3. Here you pay less; the only catch is that you could lose objects, though the probability is very low. You can then use lifecycle policies to delete the images at a later point in time.

Which of the following is not a valid SNS subscriber? Remember all the subscribers: SWF is not among them. Here I am in the Simple Notification Service: if I click on Create Subscription, you can see the endpoints or protocols available, and the Simple Workflow Service is not one of them.

Now, you have been requested by a security officer to track all the changes to your AWS environment. In this case you should straight away choose AWS CloudTrail, because CloudTrail is the service provided by AWS which is very good for security audits. It tracks and stores events for all API calls made in your AWS account. This can be done across regions, and you can also make sure that the logs from CloudTrail are stored in an S3 bucket, which can then be processed by an analytics service at a later point in time.

Here you have a requirement to reduce the network latency between EC2 instances, and for that you use placement groups. Placement groups are an important concept. If you go to your EC2 dashboard, you have the option of placement groups. The first thing is to create a placement group, and after you create it, when you are launching your EC2 instances you have to make sure you choose the right instance type. For example, t2.micro is not supported for placement groups, so make sure you look at the AWS documentation and see which instances are supported by placement groups, then launch all the instances in that placement group. Again, I have a detailed chapter on placement groups in my AWS Solution Architect course.

Again we come to a question of which service provides underlying access to the EC2 instances: here it is the EMR service. The Elastic MapReduce service also allows you to log in to its EC2 instances.

Now, you want to enable encryption for an EBS volume: when should you do this? This has to be done when you're creating the EBS volume.

When you upload a file to S3, what is the status code that's returned? By default all the requests go through the HTTP protocol, and when any successful request is made, the default is the 200 OK status code. The same goes for S3: when you upload a file to S3 and it is successful, you will get the 200 OK code as the response.

Which of the following services is the durable key-value store? This is an easy question: it has to be the Simple Storage Service.
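For the placement-group question, here is a short boto3 sketch that creates a cluster placement group and launches instances into it. The AMI ID is a hypothetical placeholder, and c5.large stands in for any placement-group-capable type (t2.micro, as noted, is not):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a cluster placement group for low inter-instance latency.
ec2.create_placement_group(GroupName="low-latency-group", Strategy="cluster")

# Launch instances into the group with a supported instance type.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="c5.large",          # a placement-group-capable type
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "low-latency-group"},
)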
Next: you work for a company and you want a hybrid cloud approach. You have your on-premise infrastructure, you want to connect to your VPC in AWS, and you're going to be using a VPN connection through the internet. What's required for the VPN connection? You have two things: the customer gateway, which remember is on the customer side, and the virtual private gateway, which is the interface you have on the AWS side. You will assign a static, publicly routable IP address to the customer gateway for the VPN connection.

Again, the next question pertains to storing photos while keeping costs as low as possible: for this you can use Reduced Redundancy Storage.

Here you have a question where an EC2 instance is running in a public subnet. You have the internet gateway attached to the VPC, so one prerequisite is done. You have the route tables properly configured; that's also done. The only thing left is to ensure the instance has either a public IP or an Elastic IP. So the most probable reason you cannot connect to it from the outside is that an Elastic IP has not been assigned to that instance.

Which of the following services allows you to work with Chef recipes? That's OpsWorks. OpsWorks is an automation service available in AWS that allows you to use your custom-built Chef recipes in the lifecycle events available in OpsWorks stacks.

Here you have a question where you want to use a database service, you want to reduce the amount of admin activities, and you also need to ensure your database supports complex queries and table joins. As soon as you hear this requirement, you should immediately be triggered to think this should not be DynamoDB, because DynamoDB does not support complex queries and table joins. So you have the option of using Amazon RDS, and if you want high availability, you would choose Amazon RDS with the Multi-AZ feature.

Which of the following are true for encrypted EBS volumes? Remember that snapshots are automatically encrypted when taken from encrypted Amazon Elastic Block Store volumes. Also, encryption is supported on all EBS volume types. Here I am in the EC2 console: if I go to Volumes and create a volume, you can see the encryption feature, and no matter which volume type I choose, whether General Purpose SSD, Provisioned IOPS SSD, Cold HDD, and so on, all the options have the facility to encrypt the volume.

Which of the following encrypts data at rest by default? This is Amazon Glacier; please remember this as a point for the exam.

You have a video processing application that needs to upload videos from users: which is the durable storage option? Remember, this should be the Simple Storage Service. The entire idea of the Simple Storage Service is to provide object storage which is durable, highly available, easy to use, and able to store virtually any type and any amount of data. You could store on EBS volumes, but then you have the maintenance overhead of managing those volumes, handling failures, and so on. So choose Amazon S3.

Now you're working for a company that is having a problem with their on-premise storage: they want to expand and extend it, so they want to use the Storage Gateway service. The question is which gateway type they should use. In this question there is a requirement that they should be able to access frequently accessed data, and because of this you should choose gateway cached volumes, because with gateway cached volumes, as the name suggests, whatever data is accessed more frequently is cached on-premise so that it can be accessed by applications hosted in your on-premise environment.
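To make the encrypted-EBS point concrete, a small boto3 sketch; remember encryption is chosen at creation time, and any volume type accepts it. Region, AZ, and size are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Encryption is set when the volume is created and works on every type.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,          # GiB
    VolumeType="gp2",  # equally valid for io1, st1, sc1, etc.
    Encrypted=True,
)
print(volume["VolumeId"])

# Any snapshot taken from this volume will itself be encrypted automatically.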
Next, you are deploying a bastion host. Remember, a bastion host has to be deployed in a public subnet. You also have to ensure, when you configure the security group for the bastion host, that access is only provided for the machine from which you are going to connect. For example, in my EC2 console this is a bastion host; if I go to the rules, currently there are no rules defined. If you want to define a rule to connect to this bastion host, since it's a Linux instance you will choose the SSH protocol, and you should not set the source as Anywhere. Instead, find the static, publicly routable IP of the workstation in your environment, whether a Windows workstation, a Linux workstation, or a Linux server, and put that specific IP address. So let's assume this is the IP address of my machine: put the specific IP address with the /32 suffix, which specifies that this single, lone IP address should be the source.

Here you are getting a network error or a connection timeout. This is the case when you are using the wrong private key. If you are SSHing into a Linux instance, or using PuTTY, make sure you're using the private key file corresponding to the key pair you used to launch the instance.

Here you want to point your zone apex to a load balancer using Route 53. Remember that we saw this in one of our earlier questions: if you want to point to an Elastic Load Balancer, an S3 bucket, or a CloudFront distribution, you use an alias record, and for the zone apex you need to use an A record.

Which of the following instance types are available as Amazon EBS-backed? The answer is the general-purpose t2 family. I recommend going through the different types of instances available in AWS: for t2 you can see that the storage is EBS, while for some others the storage is SSD instance store; m3, for example, offers SSD instance storage rather than EBS-only storage. This is important from an architect's point of view: you should understand which instance types use SSD instance store and which are EBS-backed.

Here you have three VPCs: VPC 1, VPC 2, and VPC 3. There is a peering connection between VPC 1 and VPC 2, and between VPC 2 and VPC 3, and you want requests to be routed from VPC 1 to VPC 3. Can you use transitive peering? Since there is already a connection between 1 and 2 and between 2 and 3, does it indirectly mean that we can go from 1 to 3? Well, you can't: VPC peering connections are not transitive. Hence you have to create a new peering connection from VPC 1 to VPC 3 if you want requests to be routed in this direction.

Here we have a question where you have been allocated an Elastic IP. You've requested an Elastic IP; now what steps should you take to ensure that the Elastic IP is being used properly from a cost optimization perspective? Remember that if you keep the address unassociated, so you don't attach it to an instance, or you attach it to an instance which is in the stopped state, you will incur a cost. So always ensure that you associate the Elastic IP with a running instance.
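And for the Elastic IP cost point, a minimal boto3 sketch that allocates an address and associates it with a running instance straight away, so it never sits idle and accrues charges; the instance ID is a hypothetical placeholder:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allocate an Elastic IP and attach it immediately: an unassociated
# address (or one on a stopped instance) incurs a charge.
allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=allocation["AllocationId"],
    InstanceId="i-0123456789abcdef0",  # a running instance (placeholder)
)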
Now, here you want to monitor access to S3 buckets. Remember that you can enable logs in S3 (you can also enable logs on the Elastic Load Balancer), and there you can actually see what requests are coming in to S3. Remember that AWS CloudTrail records all API calls, but this question is very specific to S3, so use the S3 logging facility. Here I am in S3: if you go to any bucket and on to Properties, then Logging, here I've already enabled logging. You can choose the destination bucket and click on Save, and the logs start flowing into the destination bucket.

Here you are trying to upload a 5 GB video to S3 and you are seeing degraded performance. The recommendation from AWS is that if you have a file size greater than 100 MB, you should use the S3 multipart upload. AWS Snowball is used in case you want to transfer data in the terabyte range; a Snowball device can transfer around 100 TB to AWS.

Now you have a mobile app that needs to access the Simple Storage Service: use web identity federation. SAML is normally used when you want to access the AWS console from your on-premise infrastructure, and cross-account access is for when you want one AWS account to access resources in another account. Use web identity federation when you want a mobile app to access resources in AWS.

This question is pretty simple: you want to analyze real-time data, and Kinesis is the option. Kinesis is specially built for analyzing data in real time.

Which of the following is not a Trusted Advisor category? Trusted Advisor is an important topic, so make sure you go through the Trusted Advisor service. Here I am in the AWS console: if you go to the Management Tools and on to Trusted Advisor, you will see the four categories in which you get recommendations: cost optimization, performance, security, and fault tolerance. There is no high availability category.

Here you want to connect your on-premise Active Directory to the AWS console. For this, AWS provides the Directory Service AD Connector. Normally organizations will already have all their users, groups, and permissions defined in Active Directory, so if you want to use those, just use the AD Connector.

And for the final question: you want to monitor the IOPS metrics for your Relational Database Service instance; make sure you use CloudWatch and SNS.
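Finally, for the multipart-upload recommendation, boto3's transfer layer can do the splitting for you. A sketch with hypothetical bucket and file names, setting the multipart threshold at 100 MB as the question suggests:

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Switch to multipart above 100 MB; parts are uploaded in parallel,
# which is where the performance improvement for large files comes from.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # 100 MB
    multipart_chunksize=16 * 1024 * 1024,   # 16 MB parts
    max_concurrency=8,
)
s3.upload_file("video.mp4", "my-video-bucket", "uploads/video.mp4", Config=config)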

