Is it possible to do just one reboot for both the VMware Tools and VM hardware upgrades?
I'm thinking of upgrading VMware Tools first,
then briefly shutting down the VM,
then upgrading the VM hardware,
then powering the VM back on.
Thoughts?
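For what it's worth, that sequence can be scripted end to end. Below is a rough pyVmomi sketch of it, assuming vCenter access; the hostname, credentials and VM name are placeholders, and whether the Tools upgrade itself avoids a guest reboot depends on the guest OS and installer options.

import ssl
import time

from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab shortcut; don't skip cert checks in production
si = SmartConnect(host="vcenter.example.local",          # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app-server-01")   # placeholder VM name

# 1. Upgrade VMware Tools while the guest is running. Depending on the guest OS you may
#    need installerOptions to suppress an automatic reboot at this step.
WaitForTask(vm.UpgradeTools_Task())

# 2. One graceful shutdown - the only downtime step of the whole procedure.
vm.ShutdownGuest()
while vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOff:
    time.sleep(5)

# 3. Upgrade the virtual hardware while the VM is powered off
#    (omit the version argument to go to the latest the host supports).
WaitForTask(vm.UpgradeVM_Task())

# 4. Power the VM back on.
WaitForTask(vm.PowerOnVM_Task())

Disconnect(si)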
Hi Guys,
I know this is already online
However, in my vSphere Client I'm getting different information. Is there someone who can confirm 1000% what the case is? I'm using:
VMware ESXi 5.5.0 build-1746018
VMware ESXi 5.5.0 Update 1
Any info is greatly appreciated!
It is expected. Even though ESXi 5.5 supports a maximum VMDK size of 62 TB, the legacy vSphere (VI) Client will continue to show 2 TB, because the new 5.5 features are only exposed in the vSphere Web Client. If you check the same setting from the Web Client, it will definitely show 62 TB as the maximum VMDK size. If you want to create a VMDK larger than 2 TB, you will have to use the Web Client; the VI Client only fully supports features up to version 5.1.
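As a side note, the vSphere API can also create disks beyond 2 TB, so scripting is an alternative to the Web Client. The pyVmomi sketch below is only illustrative: it assumes the vim.VirtualMachine object has already been looked up, that the VM's first SCSI controller has the default key 1000, and that unit 1 on that controller is free.

from pyVmomi import vim
from pyVim.task import WaitForTask

def add_large_disk(vm, size_tb=10):
    """Attach a new thin-provisioned VMDK larger than the old 2 TB limit."""
    disk = vim.vm.device.VirtualDisk()
    disk.capacityInKB = size_tb * 1024 * 1024 * 1024       # TB -> KB
    disk.controllerKey = 1000                               # assumed first SCSI controller
    disk.unitNumber = 1                                     # assumed free slot
    disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    disk.backing.diskMode = "persistent"
    disk.backing.thinProvisioned = True

    dev_spec = vim.vm.device.VirtualDeviceSpec()
    dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    dev_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    dev_spec.device = disk

    WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[dev_spec])))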
Thank you for that immediate response !!!
Sorry to bring this thread back to life; however, I thought I would check and see if you were ever able to get a resolution to your dilemma. We are having the same error.
The purchased licenses are vSphere vCenter 5 Standard, vSphere ESXi 5 Standard, and vSAN. vSAN was deployed by bootstrapping vCenter onto a single ESXi host and then building the vDS and vSAN cluster with the other two hosts. All of this was deployed with evaluation licensing, and once configured we applied the vCenter license, the vSAN license, and the first ESXi Standard host license without error. When we tried to license the second and third hosts we got a license downgrade error, because we have the vSphere Distributed Switch enabled and that feature is not included in the Standard ESXi license.
We have an open support case with VMware; however, there seems to be a knowledge gap about what is included with vSAN and how to apply those licenses successfully. Any info would be great.
Thanks,
Bruce
As jballack replied, it is best practice to keep those types of traffic separated. It is entirely possible to let the NFS share reside in the same VLAN you use for vMotion, but the best performance and traffic separation come from using separate VLANs. I'm assuming your switch supports VLANs.
Another good reason to keep traffic separated in different VLANs is that on most switches you can then use an MTU of 9000 instead of the default 1500 for vMotion, which is much better for large transfers, while keeping MTU 1500 for the other traffic.
A quick tip on the same subject: I have experienced on some switches, Cisco in particular, that if you put all traffic on a single port and carry regular traffic untagged (native VLAN), you can get hiccups and drops. If you set the native VLAN to something else (an unused VLAN), the "regular" VLAN can be tagged as well, and in my experience that is the best way to do it.
This is a sample lab config I have used for some mini labs (all IPs are samples); a scripted sketch of the vMotion vmkernel piece follows below:
Esx1:
NIC team 1:
Native VLAN on the switch set to 999 (not actually used), trunked port on the switch
vmk port with VLAN 1 (10.0.0.10) for administrative traffic (management)
vmk port with VLAN 2 (10.0.2.10) for NFS storage
vmk port with VLAN 3 (10.0.3.10) for vMotion, MTU set to 9000
Standard port groups for VM traffic with VLANs 4-10
Esx2:
NIC team 1:
Native VLAN on the switch set to 999, trunked port on the switch
vmk port with VLAN 1 (10.0.0.11) for management
vmk port with VLAN 2 (10.0.2.11) for NFS storage
vmk port with VLAN 3 (10.0.3.11) for vMotion, MTU set to 9000
Standard port groups for VM traffic with VLANs 4-10
Storage server:
NIC 1:
Management untagged on VLAN 1 (10.0.0.12)
Tagged VLAN 2 (10.0.2.12) for NFS shares
With this setup I had the best traffic separation and good throughput. NIC teaming is then available for all types of traffic as well.
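Here is a hedged pyVmomi sketch of scripting the vMotion piece of that layout on one host. It assumes the vSwitch already exists and that the names, VLAN ID and IP are just the sample values from above.

from pyVmomi import vim

def add_vmotion_vmk(host, vswitch="vSwitch0", portgroup="vMotion",
                    vlan=3, ip="10.0.3.10", netmask="255.255.255.0"):
    """Create a tagged port group plus a jumbo-frame vmkernel port and enable vMotion on it."""
    net_sys = host.configManager.networkSystem

    # Port group tagged with the vMotion VLAN on an existing standard vSwitch.
    net_sys.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name=portgroup, vlanId=vlan, vswitchName=vswitch,
        policy=vim.host.NetworkPolicy()))

    # vmkernel port with a static IP and MTU 9000.
    vmk = net_sys.AddVirtualNic(
        portgroup=portgroup,
        nic=vim.host.VirtualNic.Specification(
            ip=vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask=netmask),
            mtu=9000))

    # Mark the new vmk for vMotion traffic.
    host.configManager.virtualNicManager.SelectVnicForNicType(
        nicType="vmotion", device=vmk)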
Take a look at the thread below. Hopefully it will shed some light on the situation.
In fact, what you need to look into is routing as well, and how the applications tolerate failover to a different location.
If you have a transparent network between the locations, and automatic failover with gateways and IP routing at the second location, then you only need to look into using a witness server so the sites can decide which one has really been cut off from the "world".
So without explaining the basic network setup and location addressing, you can't really get a specific answer.
A good budget way to achieve ideal conditions is actually to use nested ESXi hosts.
Let's say you get a couple of fairly good host servers with a decent amount of memory; then you can set up additional ESXi hosts as VMs and simulate larger environments. This also gives you the benefit of organizing things into vApps and being able to redeploy fast.
I highly recommend you get a good switch to simulate the challenges with VLANs and that kind of segmentation.
Another option is a well-specced workstation running VMware Workstation.
In short, the options are many, and you should find some nice setups out there. Personally I am blessed with a massive home lab, but I also have a miniature setup on a workstation for quick tests and mini labs.
Build the lab according to your budget and the labs you wish to work on. If you can get a couple of HP ProLiant DL380 G5/G6/G7 servers that you can equip with SATA disks, you have very solid lab equipment that isn't too noisy. I did try with Dell servers, but after my wife got tired of the "jet engines" in the attic I switched to HP servers, and now we can barely hear my whole lab running.
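If you go the nested-ESXi route, the one VM setting the nested hosts really need is hardware-assisted virtualization exposed to the guest. A minimal pyVmomi sketch, assuming the nested host's VM object has already been looked up:

from pyVmomi import vim
from pyVim.task import WaitForTask

def expose_hw_virtualization(vm):
    """Enable nested HV so an ESXi guest can run 64-bit VMs of its own."""
    # Equivalent to adding vhv.enable = "TRUE" to the .vmx file.
    WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(nestedHVEnabled=True)))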
Bump... Does anybody have any ideas?
I would say dual NICs are almost a must for labs, as you can introduce complexity and separation to lab with.
Having the datastore on NFS storage is good enough if you use a gigabit switch; keep in mind the VM's datastore isn't accessed for full transfers, just for changes, so the traffic levels aren't that high in normal use.
I would say the best investments for a lab are memory for the hosts and a good switch.
Installing ESXi 6.0 Beta on a Mac Mini 6.0 and it's stuck on "Initializing Scheduler"; it did that even with 5.5 U2.
The correct answer is: if you bought it from a licensed VMware partner, it's probably good; if not, then it's probably not. A quick check is to see whether your license is connected to your company in My VMware.
vmware-mount is meant to mount the disk files of an inactive VM, not to give access to live files. A running VM holds file locks on its disk files.
The solution is to use a share accessible to both the VM and the client machine.
Use a private VLAN and only give the firewall VM the promiscuous setting. This keeps the other VMs from talking outside the firewall.
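A sketch of the promiscuous part with pyVmomi, assuming a standard vSwitch: give the firewall VM its own port group with the security override, so the port groups the other VMs use keep promiscuous mode rejected. Actual private VLANs would require a distributed switch, which isn't shown here, and the names and VLAN ID below are placeholders.

from pyVmomi import vim

def add_firewall_portgroup(host, portgroup="FW-Promiscuous", vswitch="vSwitch0", vlan=10):
    """Dedicated port group for the firewall VM with promiscuous mode allowed only there."""
    policy = vim.host.NetworkPolicy(
        security=vim.host.NetworkPolicy.SecurityPolicy(allowPromiscuous=True))
    host.configManager.networkSystem.AddPortGroup(
        portgrp=vim.host.PortGroup.Specification(
            name=portgroup, vlanId=vlan, vswitchName=vswitch, policy=policy))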
Hello,
are you using iSCSI storage by any chance? Would it be possible for you to disable the NICs on the physical host, or the ports they are plugged into at the switch level? If everything fails, I suggest wiping the SD card with the existing ESXi installation and installing a fresh copy; it saves you a lot of headache.
Thanks guys, that must be it, even though I haven't tested it yet. I did have a problem with write speed from the 2003 terminal server, but not from any Server 2008 or Windows 7 workstations. It seemed to affect only one direction: not reading, just writing. Then I added another NIC with VMXNET3 to the 2012 server and that seemed to fix it.
I have a DMZ with 7 hosts, and I want to P2V all of them, and put the converted VMs on an ESXi server that is on my protected LAN. (in other words, the ESXi management network address is an internal address). Now, half of my DMZ machines are multi-homed, so they have a NIC on my protected LAN. Those should be easy to convert, and tell the converter to put the VM on a datastore on the ESXi host I want.
The problem is, I have 3 DMZ machines that only have 1 NIC (they get traffic through a load balancer, which does have multiple NICs, so they don't need multiple NICs themselves). So how can I P2V these hosts?
I have an unused NIC on my target ESXi host. I can activate it, and give it a DMZ address. Then these 3 machines should be able to see the ESXi host and I can put the VM there. Correct?
So my question is:
Can I have multiple management networks assigned for an ESXi 5.5 host? I'm thinking:
Currently: 1 standard vSwitch on my protected LAN for the management network
Proposed: 1 standard vSwitch (a new one) with a NIC on my DMZ
I *think* that should allow the DMZ machines to see this ESXi host as a target for P2V. When everything is converted and running, I would remove this DMZ NIC from the ESXi host.
Am I right? Will this work? Have I left anything out? ESXi shouldn't care that I have multiple management networks defined (with different addressing schemes), as long as each is on its own vSwitch, using different NICs and different VLANs, right?
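For reference, here is roughly what that temporary setup looks like scripted with pyVmomi, assuming the host object has already been looked up. The vSwitch name, spare vmnic, port group name and DMZ address are all placeholders to replace with your own values, and the whole thing can be torn down again after the P2V.

from pyVmomi import vim

def add_dmz_mgmt(host, vswitch="vSwitch-DMZ", uplink="vmnic3",
                 portgroup="DMZ-Mgmt", ip="192.0.2.10", netmask="255.255.255.0"):
    """Second management vmkernel port on its own vSwitch/NIC for the duration of the P2V."""
    net_sys = host.configManager.networkSystem

    # New standard vSwitch bound to the spare DMZ uplink.
    net_sys.AddVirtualSwitch(
        vswitchName=vswitch,
        spec=vim.host.VirtualSwitch.Specification(
            numPorts=128,
            bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=[uplink])))

    # Port group and vmkernel port with the DMZ address.
    net_sys.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name=portgroup, vlanId=0, vswitchName=vswitch, policy=vim.host.NetworkPolicy()))
    vmk = net_sys.AddVirtualNic(
        portgroup=portgroup,
        nic=vim.host.VirtualNic.Specification(
            ip=vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask=netmask)))

    # Tag the new vmk for management traffic alongside the existing management vmk.
    host.configManager.virtualNicManager.SelectVnicForNicType(
        nicType="management", device=vmk)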
Thanks
Make sure that node interleaving is disabled in the BIOS memory settings.
HawkieMan,
Thank you for your reply. Fileshare? This is Windows 3.1... Do you mean on the ESXi box?
What type of file share are you imagining? If this is a Linux NFS share, Windows cannot access it natively, so that would be out. Second, I don't think you can serve a Windows share from an ESXi machine.
The virtual machine is off; I'm aware that you cannot mount a running VM.
Any other ideas?