Thanks for the reply.
I have replaced the HDD & LAN card and reinstalled ESXi again.
Will the problem be resolved?
Vaibhav
Tape backup time has increased by 30%.
We are using Backup Exec 2010 R3 through an
Adaptec AHA-29320LPE PCI-based SCSI-320 card to an HP MSL 2024 G3 series tape library with an HP Ultrium 4 drive.
I have updated the firmware for all hardware and updated BE 2010 to the latest patches.
When running ESXi 4.1 with Windows 2003 R2 using the Adaptec driver, backup times were adequate. After updating to ESXi 5.5, the same Windows 2003 R2 server no longer sees the Adaptec card, only a generic SCSI card. This is because the AHA-29320LPE is not on the HCL. I see no other SCSI-320 card on that list, so I do not see a way to swap the card for a supported one.
Initially the upgrade defaulted the VM to use the LSI controller. I changed this to the paravirtual controller, as I understand it has improved performance.
What can I do to restore the performance of the backup? Will SCSI passthrough help? Will something else work?
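For reference, a controller change like that can also be made from PowerCLI. This is only a rough sketch with placeholder server and VM names; the VM needs to be powered off, and Windows 2003 needs the PVSCSI driver from VMware Tools in place before the boot controller is switched:
Connect-VIServer -Server vcenter.example.com        # placeholder vCenter name
$vm = Get-VM -Name "Win2003-Backup"                 # placeholder VM name
Get-ScsiController -VM $vm | Set-ScsiController -Type ParaVirtual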
We've managed to reverse engineer the script and create a scripted process that will take any SLES 11 SP1, SP2, or SP3 install and migrate it to SLES for VMware, with only a single reboot and supplying the proper VMware SLES license. If anyone else needs to accomplish this in the future, please feel free to contact me.
None of the things you mentioned were required. Please read the KB I linked you (the e1000 link).
You are still not following me. I'm talking about the entitlement of SLES for VMware that comes with the vSphere licensing.
SUSE Linux Enterprise Server (SLES) for VMware: Enterprise Linux | United States
The link I provided previously migrates an install of SLES for VMware to SLES licensed by Novell. It does not remove drivers; SLES includes all of those in the kernel and has since SP1. I have hundreds of virtual machines that I want to do the reverse to.
P.S. David - stop editing my posts because you are butt hurt about not understanding Linux. Instead of spending so much time trying to accumulate posts on a forum, fire up a VM and learn Linux.
Just as an FYI, we have scripted this migration from SLES licensed through Novell to SLES for VMware; no reinstall is necessary and it works on SLES 11 SP1, SP2, and SP3.
"...I`m looking for reason why VM goes in freeze state as soon as my datastore goes out of free disk space..."
I do not understand. You have the answer, so what more do you want to hear? Where should your VM write if your datastore is full?
I am not sure exactly what happened, but... I ended up rebooting the first host in the cluster, and after that it was able to pass jumbo packets.
Now on to the real fun of dealing with the issue that brought this to light in the first place...
Thanks all for the help!
The two main reasons for such issues are:
André
I am receiving the following message after attempting to connect a host to vCenter. The host was previously connected to this vCenter and its hostname was changed. It was removed from vCenter. Now it cannot be re-added. Here is the message:
"Permission to perform this operation was denied. You do not hold privilege "System > View" on folder ""
For out-of-the-box HA and FT support, you really need some type of shared storage and an installation of vCenter Server. You could also try a third-party tool (something like Veeam) to set up replication of your VMs, but it would be best to get some kind of shared storage in place for the out-of-the-box support. If it is really just about starting the VMs on the other host in case the primary one fails, you might be best off with a decent backup solution.
When you connect to a vCenter Server via PowerCLI you use the Connect-VIServer command (I noticed the script is missing that command, though), and once connected to vCenter you can get every VM with something like Get-VM. When cloning, you do not need to specify the source host of the VM; the script will look it up via the connected vCenter Server inventory.
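As a rough illustration of that flow (the server, VM, host, and datastore names here are just placeholders, not taken from your script):
Connect-VIServer -Server vcenter.example.com        # connect to vCenter once per session
$source = Get-VM -Name "SourceVM"                   # resolved from the vCenter inventory, no host needed
New-VM -Name "SourceVM-clone" -VM $source -VMHost "esxi01.example.com" -Datastore "datastore1"
Because the session is against vCenter rather than a single host, Get-VM finds the VM wherever it currently runs.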
~ # esxtop -b -c .customFile -d 10 -n 2 > esxtop.csv
Works fine for me, on VMware ESXi 5.5.0 build-1331820. Contents of the .customFile file:
~ # cat .customFile
AbcDeFghij
aBcDefgHijKLmnOpq
ABCdEfGhijkl
ABcdeFGhIjklmnop
aBCDEFGH
AbcDEFGHIJKLmno
ABCDeF
ABCDe
5c
I am quite curious where you're at with this now, as it's been eight months or so since you abandoned VMware for this application. I don't fault you for the decision, given the problem at hand and the application not working, but you do seem to take a very "it's my problem, so it's everyone's problem!" attitude, and that is certainly not the case.
When I try to cat the files in PuTTY it just throws garbage back at me, but I am guessing you are looking to see what version of syslinux I am using. I downloaded version 3.86 of syslinux and took mboot.c32 from the com32 folder.
When I modify the pxelinux.cfg/default file to:
default install
label install
kernel esx-5.5/mboot.c32
append -c esx-5.5/boot.cfg
I get the error:
Loading -c....failed!
No files found!
The boot.cfg file has all the right permissions and so do the rest of the files. I tried the prefix command in the boot.cfg but thought maybe that was not working and put the full path to each file in the boot.cfg. No luck still.
That's embarrassing. I was a few versions behind; I just updated to the latest and now it works.
Thanks for all the help.
Andy
Hey -- I should have mentioned -- we are not using the GPU for rendering; it's purely CPU and memory. Even disk isn't that big a concern.
weinstein, that's exactly what I'm looking at doing. However, my question is: will it work efficiently? I don't want to commit to an expensive hardware purchase only to realise it doesn't work as intended.
Interesting thread that has been revived here. I too would be curious how things are going. I agree that there are inherent problems with multicast in some situations where it just flat out dies at the vSwitch. The advice given by ChadAEG regarding trying the Cisco Nexus 1000v is great advice. I am a huge fan and long time user of the 1000v. Any multicast issues I have faced were easily resolved when using the 1000v. Further, the support integration between VMware and Cisco is amazing.
If all else fails, the best way to defeat a multicast issue is PCIe NIC passthrough to the VM. Give that a try as that avoids the vSwitch completely.
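For what it's worth, once the NIC is enabled for DirectPath I/O on the host, the attachment can also be done from PowerCLI. A minimal sketch, assuming placeholder host and VM names; the VM has to be powered off and needs a full memory reservation for passthrough:
$vm  = Get-VM -Name "MulticastVM"                                       # placeholder VM name
$nic = Get-PassthroughDevice -VMHost "esxi01.example.com" -Type Pci |   # PCI devices enabled for passthrough on the host
       Where-Object { $_.Name -match "Ethernet" }                       # pick the physical NIC to hand through
Add-PassthroughDevice -VM $vm -PassthroughDevice $nic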