amazon web services - Creating n new instances in AWS EC2 VPC and then configuring them


I'm having a hard time doing what seems like a standard task, and I'm hoping someone can help me. I've googled like crazy, and most of the examples are either not in a VPC or use a deprecated structure that makes them wrong or unusable in my use case.

Here are my goals:

  1. I want to launch a whole mess of new instances in a VPC (the sample code below has 3; the real run will have hundreds)
  2. I want to wait for those instances to come alive
  3. I want to configure those instances (ssh to them, change the hostname, enable services, etc. etc.)

Right now I do this in 2 tasks: create the instances in 1 playbook, wait for them to settle down, then run a 2nd playbook to configure them. That's not the way I want to keep going, because I want this to keep moving - there has to be a 1-shot answer to this.

Here's what I have so far for the playbook:

    ---
    - hosts: localhost
      connection: local
      gather_facts: false
      tasks:
        - name: provision lunch
          with_items:
            - hostname: eggroll1
            - hostname: eggroll2
            - hostname: eggroll3
          ec2:
            region: us-east-1
            key_name: eggfooyong
            vpc_subnet_id: subnet-8675309
            instance_type: t2.micro
            image: ami-8675309
            wait: true
            group_id: sg-8675309
            exact_count: 1
            count_tag:
              name: "{{ item.hostname }}"
            instance_tags:
              name: "{{ item.hostname }}"
              role: "supper"
              ansibleowned: "true"
          register: ec2

        - name: wait for ssh to come up
          wait_for: host={{ item.private_ip }} port=22 delay=60 timeout=900 state=started
          with_items: '{{ ec2.instances }}'

        - name: update hostname on instances
          hostname: name={{ item.private_ip }}
          with_items: '{{ ec2.instances }}'

And it doesn't work. The error is:

    TASK [wait for ssh to come up] *************************************************
    [DEPRECATION WARNING]: Skipping task due to undefined Error, in the future this
    will be a fatal error.. This feature will be removed in a future release.
    Deprecation warnings can be disabled by setting deprecation_warnings=False in
    ansible.cfg.

    TASK [update hostname on instances] ********************************************
    [DEPRECATION WARNING]: Skipping task due to undefined Error, in the future this
    will be a fatal error.. This feature will be removed in a future release.
    Deprecation warnings can be disabled by setting deprecation_warnings=False in
    ansible.cfg.

Which makes me sad. That is the latest incarnation of the playbook. I've tried to rewrite it using every example I can find on the internet. All of them have with_items written in a different way, and for every one of them Ansible tells me that way is deprecated, and then it fails.

So far Ansible has been fun and easy, but this is making me want to toss my laptop across the street.

Any suggestions? Should I be using register and with_items at all? Would I be better off using this:

add_host: hostname={{item.public_ip}} groupname=deploy 

instead? I'm wide open to a rewrite here. Otherwise I'm going to go write this in 2 playbooks, and I would love suggestions.
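For reference, a single-playbook flow along those add_host lines could look like the sketch below. It reuses the placeholder IDs from the playbook above and is untested; the deploy group name is arbitrary:

```yaml
---
# Play 1: launch the instances and register them in an in-memory group.
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: provision lunch
      ec2:
        region: us-east-1
        key_name: eggfooyong
        vpc_subnet_id: subnet-8675309
        instance_type: t2.micro
        image: ami-8675309
        wait: true
        group_id: sg-8675309
        count: 3
      register: ec2

    - name: wait for ssh to come up
      wait_for: host={{ item.private_ip }} port=22 delay=60 timeout=900 state=started
      with_items: "{{ ec2.instances }}"

    - name: add new instances to the deploy group
      add_host: hostname={{ item.private_ip }} groupname=deploy
      with_items: "{{ ec2.instances }}"

# Play 2: configure everything that was just launched.
- hosts: deploy
  become: true
  tasks:
    - name: update hostname on instances
      hostname: name={{ inventory_hostname }}
```

Hosts added with add_host survive across plays within the same ansible-playbook run, so the second play sees the freshly launched machines without touching the static inventory.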

Thanks!

****EDIT**** It's starting to feel like something is broken or has changed. I've googled dozens of examples, written mine the same way, and they all fail with the same error. Here is a simple playbook now:

    ---
    - hosts: localhost
      connection: local
      gather_facts: false
      vars:
        builderstart: 93
        builderend: 94
      tasks:
        - name: provision lunch
          ec2:
            region: us-east-1
            key_name: dakey
            vpc_subnet_id: subnet-8675309
            instance_type: t2.micro
            image: ami-8675309
            wait: true
            group_id: sg-ou812
            exact_count: 1
            count_tag:
              name: "{{ item }}"
            instance_tags:
              name: "{{ item }}"
              role: "dostuff"
              extracheese: "true"
          register: ec2
          with_sequence: start="{{ builderstart }}" end="{{ builderend }}" format=builder%03d

        - name: newies
          debug: msg="{{ item }}"
          with_items: "{{ ec2.instances }}"

It couldn't be more straightforward. But no matter how I write it, no matter how I vary it, I get the same basic error:

    [DEPRECATION WARNING]: Skipping task due to undefined Error, in the future this
    will be a fatal error.: 'dict object' has no attribute 'instances'.

So it looks like it's the with_items: "{{ ec2.instances }}" line that's causing the error.

I've used debug to print out ec2, and the error looks accurate. It looks like the structure has changed on me. It looks like ec2 contains a dictionary with results as a key to a dictionary object, and instances is a key inside that dictionary. I can't find a sane way to access the data.
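That matches how register behaves with a loop: each loop iteration's module result is appended to a results list, so the instances live at ec2.results[N].instances rather than ec2.instances. If that is the shape, one way to loop over every launched instance is with_subelements; a sketch assuming that structure:

```yaml
# ec2.instances is undefined here; each entry of ec2.results carries its
# own instances list, so walk the nested structure with with_subelements.
- name: newies
  debug: msg="{{ item.1.private_ip }}"
  with_subelements:
    - "{{ ec2.results }}"
    - instances
```

Here item.0 is the per-iteration ec2 result and item.1 is one instance inside it.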

For what it's worth, I've tried this in 2.0.1, 2.0.2, and 2.2, and I get the same problem in every case.

Is the rest of the world using 1.9 or something? I can't find an example anywhere that works. It's frustrating.

Thanks again for the help.

Don't do this:

    - name: provision lunch
      with_items:
        - hostname: eggroll1
        - hostname: eggroll2
        - hostname: eggroll3
      ec2:
        region: us-east-1

because used that way, with_items fills item with your hostname entries instead of the instance info from ec2. You are receiving the following output:

    TASK [launch instance] *********************************************************
    changed: [localhost] => (item={u'hostname': u'eggroll1'})
    changed: [localhost] => (item={u'hostname': u'eggroll2'})

But item should look like this:

changed: [localhost] => (item={u'kernel': none, u'root_device_type': u'ebs', u'private_dns_name': u'ip-172-31-29-85.ec2.internal', u'public_ip': u'54.208.138.217', u'private_ip': u'172.31.29.85', u'id': u'i-003b63636e7ffc27c', u'ebs_optimized': false, u'state': u'running', u'virtualization_type': u'hvm', u'architecture': u'x86_64', u'ramdisk': none, u'block_device_mapping': {u'/dev/sda1': {u'status': u'attached', u'delete_on_termination': true, u'volume_id': u'vol-37581295'}}, u'key_name': u'eggfooyong', u'image_id': u'ami-fce3c696', u'tenancy': u'default', u'groups': {u'sg-aabbcc34': u'ssh'}, u'public_dns_name': u'ec2-54-208-138-217.compute-1.amazonaws.com', u'state_code': 16, u'tags': {u'ansibleowned': u'true', u'role': u'supper'}, u'placement': u'us-east-1d', u'ami_launch_index': u'1', u'dns_name': u'ec2-54-208-138-217.compute-1.amazonaws.com', u'region': u'us-east-1', u'launch_time': u'2016-04-19t08:19:16.000z', u'instance_type': u't2.micro', u'root_device_name': u'/dev/sda1', u'hypervisor': u'xen'}) 

Try to use the following code:

    - name: Create sandbox instances
      hosts: localhost
      gather_facts: false
      vars:
        keypair: eggfooyong
        instance_type: t2.micro
        security_group: ssh
        image: ami-8675309
        region: us-east-1
        subnet: subnet-8675309
        instance_names:
          - eggroll1
          - eggroll2
      tasks:
        - name: Launch instance
          ec2:
            key_name: "{{ keypair }}"
            group: "{{ security_group }}"
            instance_type: "{{ instance_type }}"
            image: "{{ image }}"
            wait: true
            region: "{{ region }}"
            vpc_subnet_id: "{{ subnet }}"
            assign_public_ip: no
            count: "{{ instance_names | length }}"
          register: ec2

        - name: Tag instances
          ec2_tag:
            resource: '{{ item.0.id }}'
            region: '{{ region }}'
            tags:
              name: '{{ item.1 }}'
              role: "supper"
              ansibleowned: "true"
          with_together:
            - '{{ ec2.instances }}'
            - '{{ instance_names }}'

        - name: Wait for ssh to come up
          wait_for: host={{ item.private_ip }} port=22 delay=60 timeout=320 state=started
          with_items: '{{ ec2.instances }}'

The assumption is that the Ansible host is located inside of the VPC.
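To cover the configuration half (goal 3) in the same run, one option is to extend the playbook above: register each launched instance in an in-memory group with add_host, then target that group in a follow-up play. A sketch building on the answer; the launched group name, the new_hostname variable, and the become setting are my assumptions, not tested:

```yaml
    # Extra task for the play above: register each new box in an
    # in-memory "launched" group, carrying its intended hostname along
    # as a host variable.
    - name: Add new instances to an in-memory group
      add_host: hostname={{ item.0.private_ip }} groupname=launched new_hostname={{ item.1 }}
      with_together:
        - '{{ ec2.instances }}'
        - '{{ instance_names }}'

# Second play in the same playbook: configure the freshly launched hosts.
- name: Configure new instances
  hosts: launched
  become: true
  tasks:
    - name: Update hostname on instances
      hostname: name={{ new_hostname }}
```

Extra key=value pairs passed to add_host become host variables, which is how new_hostname reaches the second play.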

