Channel: VMware Communities: Message List - VMware ESXi 5

RDM


Can we map an RDM to more than one virtual machine at a given time?


Re: RDM

Re: Controlling LUN queue depth throttling in VMware ESX / ESXi (1008113)


Hi,

You need to reboot your server. This is a device issue, but queue depth should be managed on the hosts.

Read these KBs to pinpoint the exact issue using the SCSI sense codes:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=289902

VMware KB: Understanding SCSI device/target NMP errors/conditions in ESX/ESXi 4.x and ESXi 5.x

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1029039

It seems your SCSI device queue is full. You should check your storage port capacity, and also check the queue depth on the ESXi host and in your HBA driver.

"VMK_SCSI_DEVICE_QUEUE_FULL (TASK SET FULL) = 0x28

vmkernel: 1:08:42:28.062 cpu3:8374)NMP: nmp_CompleteCommandForPath:2190: Command 0x16 (0x41047faed080) to NMP device "naa.600508b40006c1700001200000080000" failed on physical path "vmhba39:C0:T1:L16" H:0x0 D:0x28 P:0x0 Possible sense data: 0x0 0x0 0x0.

This status is returned when the LUN prevents accepting SCSI commands from initiators due to lack of resources, namely the queue depth on the array.

Adaptive queue depth code was introduced into ESX 3.5 U4 (native in ESX 4.x) that adjusts the LUN queue depth in the VMkernel. If configured, this code will activate when device status TASK SET FULL (0x28) is returned for failed commands and essentially throttles back the I/O until the array stops returning this status.

For more information, see Controlling LUN queue depth throttling in VMware ESX/ESXi (1008113)."
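The adaptive throttling described above is controlled by two advanced host settings. A minimal sketch from the ESXi shell, assuming the commonly cited starting values from KB 1008113 (confirm them with your array vendor before applying):

# Enable adaptive queue depth throttling on the host
esxcli system settings advanced set -o /Disk/QFullSampleSize -i 32
esxcli system settings advanced set -o /Disk/QFullThreshold -i 4

# Verify the setting and check the current per-device queue depth
esxcli system settings advanced list -o /Disk/QFullSampleSize
esxcli storage core device list | grep -i "queue depth"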

2 datastores are unmounted and cannot be re-mounted


Currently we have 2 datastores in our lab which we cannot bring online; they were working yesterday. I thought that their partition info might have gotten overwritten by a Windows machine, but I've checked the partitions and they report OK after following this article: http://kb.vmware.com/kb/2046610

 

Both volumes come from the same NetApp aggregate and controller as the working datastores and are mapped to the same iSCSI initiators. When you attempt to mount them in the vSphere Client, you get the following error:

Call "HostStorageSystem.MountVmfsVolume" for Object "storageSystem" on ESXi 172.16.210.19" failed.

Operation failed, diagnostics report: Sysinfo error on operation returned status: Timeout. Please see the VMKernel.log for detailed information.

 

In the VMkernel log you just get this:

 

2015-01-16T10:40:36.584Z cpu16:8208)ScsiDeviceIO: 2331: Cmd(0x4124003b2880) 0x16, CmdSN 0x1cf0 from world 0 to dev "naa.60a9800043346c626b4a793543343973" failed H:0x5 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.

2015-01-16T10:40:36.584Z cpu6:10287)LVM: 11710: Failed to open device naa.60a9800043346c626b4a793543343973:1

2015-01-16T10:43:33.396Z cpu0:8224)NMP: nmp_ThrottleLogForDevice:2319: Cmd 0x1a (0x41244237c740, 0) to dev "mpx.vmhba35:C0:T0:L0" on path "vmhba35:C0:T0:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0. Act:NONE

2015-01-16T10:43:33.396Z cpu0:8224)ScsiDeviceIO: 2331: Cmd(0x41244237c740) 0x1a, CmdSN 0x1cfd from world 0 to dev "mpx.vmhba35:C0:T0:L0" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.

2015-01-16T10:48:33.401Z cpu28:8220)NMP: nmp_ThrottleLogForDevice:2319: Cmd 0x1a (0x412441596e00, 0) to dev "mpx.vmhba35:C0:T0:L0" on path "vmhba35:C0:T0:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0. Act:NONE

2015-01-16T10:48:33.401Z cpu28:8220)ScsiDeviceIO: 2331: Cmd(0x412441596e00) 0x1a, CmdSN 0x1d10 from world 0 to dev "mpx.vmhba35:C0:T0:L0" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.

 

 

Device naa.60a9800043346c626b4a793543343973:1 is VMstore2

 

/var/log # vsish

/> cd vmkModules/lvmdriver/unresolved/devices/

/vmkModules/lvmdriver/unresolved/devices/> ls

0#naa.60a9800043346c626b4a793543343973:1/

0#naa.60a9800043346c626b4b315142424361:1/

/vmkModules/lvmdriver/unresolved/devices/> cat 0#naa.60a9800043346c626b4a793543343973:1/properties

Unresolved device information {

   VMK name:naa.60a9800043346c626b4a793543343973:1

   LV name:536215f8-a64160c8-06e1-d4ae52901623

   LV State:1

   VMFS3 UUID (First extent only):536215fb-a58b7b2c-17af-d4ae52901623

   VMFS3 label (First extent only):NetApp_VMstore2

   Reason: Native unmounted volume

   Extent address start (MB):0

   Extent address end (MB):2096895

   Volume total size (MB):2096896

}

/vmkModules/lvmdriver/unresolved/devices/
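For reference, the same mount can also be attempted from the ESXi shell; a rough sketch (the label and UUID are taken from the vsish output above, check them with the list command first):

# List all filesystems the host knows about, including unmounted VMFS volumes
esxcli storage filesystem list

# Attempt the mount by label or by UUID (the same operation the vSphere Client is trying)
esxcli storage filesystem mount -l NetApp_VMstore2
esxcli storage filesystem mount -u 536215fb-a58b7b2c-17af-d4ae52901623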

Re: 2 datastores are unmounted and cannot be re-mounted

Re: ESXi 5.5 with multiple management networks (1 DMZ, 1 not) - possible?


Your plan should work, but it's not how I'd do it. That said, my way may not be the best way... I wanted to have just one bonded network to the host and run everything through those ports, and the way I picked was to use VLANs.

 

I thought it was easier to just make the DMZ network available as a tagged VLAN on the existing hardware switch the ESXi host was connected to, and add another virtual machine port group inside vSwitch0 with a VLAN ID attached to the port group. I called that "DMZ Network" and made sure the VLAN in the switch was connected to the DMZ network.

 

This also makes it easy to add more VLANs and more networks inside your ESXi host: just make them available tagged on the physical switch port and add another port group tagged with that VLAN in vSwitch0 (rough sketch below).
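Something like this from the ESXi shell, as a sketch (the port group name and VLAN ID 100 are only examples, use whatever matches your DMZ VLAN):

# Add a VM port group for the DMZ to the existing vSwitch0
esxcli network vswitch standard portgroup add -p "DMZ Network" -v vSwitch0

# Tag the port group with the DMZ VLAN ID
esxcli network vswitch standard portgroup set -p "DMZ Network" --vlan-id 100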

 

You can then select which network/port group is applied to which virtual NIC for multi-homed virtual machines.

 

Of course, if you don't foresee ever needing to run anything inside your virtual environment that is connected to the DMZ, it makes little sense to start configuring VLANs.

 

You also have the option to do the P2V to another location and then just transfer the VM files to your datastore later. You don't have to P2V straight into the virtual environment.

Re: Oracle RAC Cluster-Across-Box using VMFS over NFS


Re: ESXi 5.5 with multiple management networks (1 DMZ, 1 not) - possible?


There is no problem with your servers having one NIC; you can convert them and put them on the ESXi server, then configure your ESXi network on a switch that is connected to the load balancer.

You can also have more than one management network on ESXi; it depends on your physical and VLAN configuration.

But you don't need to put your management network in the DMZ; you can use your internal network for management and assign the DMZ VMs to another vSwitch or port group with another VLAN.


Re: 2 datastores are unmounted and cannot be re-mounted


Yup I see that now:

VMK_SCSI_HOST_ABORT = 0x05 or 0x5 = This status is returned if the driver has to abort commands in-flight to the target. This can occur due to a command timeout or parity error in the frame.

 

So if I am reading this right, the SCSI command was aborted because of something the NetApp did?

Re: ESXi 5.5 with multiple management networks (1 DMZ, 1 not) - possible?


OK. I don't plan on keeping a management network on the DMZ; I only need it for long enough to run the converter and have it available as a place to put the converted VM. Then I will remove that vSwitch completely.

 

The VMs will be configured to go through a virtual load balancer by Brocade. That's all later, though - at the moment, I am only interested in how to make that ESXi server available as a target for the converter.

Re: The Numbers Following the Device ID on esxtop Output


Jon,

 

Thank you for your answer.

However, it wasn't the World ID for the VM, since I only have one VM on this datastore, but the esxtop output shows multiple (roughly 30) "Physical Disk Per-Device-Per-World" columns.


But anyway, your help is very much appreciated.


Regards,


- Motoo

Re: one reboot for vmware tools and vmware hardware upgrade?


You can enable "VMware Tools upgrade during power cycle", then shut down the VM, upgrade the VM hardware, and finally power on the VM and wait for the Tools to upgrade automatically.

After all the steps are done, you can un-check "VMware Tools upgrade during power cycle".
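If you prefer to set that outside the client, the checkbox corresponds (as far as I know) to this entry in the VM's .vmx configuration; setting it back to "manual" is the same as un-checking it again:

tools.upgrade.policy = "upgradeAtPowerCycle"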

Re: ESXi 5.5 with multiple management networks (1 DMZ, 1 not) - possible?


For a temporary connection, I'd go with your original plan. New vSwitch, new port group "DMZ Network", connect the unused NIC, connect said unused NIC to the DMZ network: should work. A rough sketch of those steps is below.
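As a sketch from the ESXi shell (the vSwitch name and vmnic2 are just examples, pick whatever is free on your host):

# Create a separate vSwitch for the temporary DMZ connection
esxcli network vswitch standard add -v vSwitch1

# Attach the unused physical NIC as its uplink
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic2

# Add the port group the converted VM will connect to
esxcli network vswitch standard portgroup add -v vSwitch1 -p "DMZ Network"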

Re: one reboot for vmware tools and vmware hardware upgrade?


Does this mean the VM hardware will be upgraded before VMware Tools? Is there a problem with this?

 


Re: one reboot for vmware tools and vmware hardware upgrade?


Or you could just do the VMware Tools update after the hardware change using Automatic Upgrade, but in the advanced options area put

 

/s /v/qn ADDLOCAL=ALL REBOOT=ReallySuppress

 

This will do a Tools update without a reboot.
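The same MSI switches can also be used if you mount the Tools ISO and run the installer manually inside a Windows guest, roughly like this (the drive letter and 64-bit installer name are assumptions about your guest):

D:\setup64.exe /s /v "/qn ADDLOCAL=ALL REBOOT=ReallySuppress"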


Re: RDM

Re: ESXi 5.5 with multiple management networks (1 DMZ, 1 not) - possible?


You could power down the machine, move the LAN cable to an internal switch, boot up with a cold clone CD, clone it over the internal network, and go from there. Cold clones are generally cleaner anyhow, although VMware doesn't make tools for it anymore.

Problems with Preferred Fixed Path not staying on the fixed path even though no network/nic outage


Hi

 

 

We have a problem with the preferred fixed path not working properly to our "active/active" NetApp storage. We set a preferred path for a datastore LUN and, all of a sudden, the host will move to a partner path with no warning or outage; that partner path then becomes the "*" preferred path. We have not changed it to be the preferred path and have to change it back, since it is generally the non-preferred path. Our datastores sit on a dual-headed NetApp.

 

 

We are using FCoE to the storage and ESXi 5.5 U2.
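For reference, the preferred path can be checked and re-asserted from the ESXi shell; a minimal sketch (the device and path identifiers below are placeholders, not our actual IDs):

# Show the path selection policy and configured preferred path for a device
esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxx

# Re-set the preferred path for a device using the Fixed policy
esxcli storage nmp psp fixed deviceconfig set -d naa.xxxxxxxxxxxxxxxx -p vmhba2:C0:T0:L10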

 

 

Anyone seen this and can offer help?

 

Cheers in advance.

datastore inactive status after migrating vmkernel


Hi All,

After migrating a VMkernel port for NFS storage from a Distributed Switch to a Standard Switch, I'm unable to vMotion VMs to the ESXi host where I did the migration. The datastore goes into an inactive state and the operation fails. Right after the operation fails, it recomputes the datastore groups and the datastore comes back to active. If I create a vNIC on a test server and add it to the NFS port group, I'm able to use the NFS network with no problem. I also notice the same thing happens when I try to browse the datastore. The vNICs are all configured the same when comparing the dvSwitch and the standard vSwitch. Any help will be appreciated.
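A few things that can be checked from the host's shell (the vmk interface name and NFS server IP below are placeholders):

# Confirm how the host currently sees the NFS datastore
esxcli storage nfs list

# List the VMkernel interfaces after the migration
esxcli network ip interface list

# Test connectivity to the NFS server over the storage VMkernel port
vmkping -I vmk1 192.168.100.10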

 

Thanks!

Re: ESXi 5.5 with multiple management networks (1 DMZ, 1 not) - possible?


Unfortunately, this does not work. In the new vSwitch, if I change the gateway to be the DMZ gateway, I lose connectivity on the existing management NIC. And to make it even worse, I don't even have connectivity on the DMZ: all the machines on the DMZ can ping each other, but they can't ping the ESXi host.

 

Looking at Configuration, Network Adapters, I *do* see the correct observed IP address range on the NIC that is in the new Management vSwitch I made. So I guess I have some basic level of connectivity, if the ESXi host can see traffic on it.

 

So I am back to being stumped. I have a ticket open with VMware, waiting to hear back from them.


