georges' blog

October 31, 2014

Please stop posting pics with black bands at the top and bottom…

Filed under: Uncategorized — kendall @ 9:51 am
Look, you’ve got to stop posting these pics on Facebook–the ones with the black bands at the top and the bottom. (This also applies to Instagram, Twitter, Tumblr, and anywhere else smart phone pics are accepted.) Apparently you learned how to take a screen capture with your phone, but you never learned how to save a photo from email, text, or the web. It isn’t that hard.
How to save a photo from email or Safari on an iPhone
Simply tap and hold the picture until a menu appears with a “Save Image” option. Select that. Now the picture is in your camera roll–the full size original image, without those black bands at the top and the bottom.
To save a picture from Messages on an iPhone
Just a tad bit different. Tap the picture in Messages that you want a closer look at. See that icon in the upper right-hand corner of the screen, the box with the arrow coming out of it? That is the Share button. Select it and you will get to choose from a whole range of options, including saving the image to your camera roll or even posting directly to Facebook or Twitter.
Now why might you want to go to this added trouble to save the photo rather than just take a quick screen capture? The main reason is that you will save (and then repost) the full quality image. When you take a screen capture you are likely creating a lower quality duplicate of the photo in question. When that image is reposted to Facebook or wherever, it will not look as good as the original, and if enlarged it will appear pixelated. Not to mention the dumb black bands at the top and the bottom.
If you insist on continuing to take screen captures, then at least crop out the black bands before posting.
How to crop a photo on iPhone
To crop a photo on an iPhone, simply open that photo in your Photos app. In the upper right-hand corner of the screen you will see the word “Edit”. Select it. Then in the lower right-hand corner select the cropping tool. Adjust the size of the crop area and select “Crop”.

October 22, 2013

Logging in Automatically and Other Windows 8.1 Tweaks

Filed under: Technology,Uncategorized — kendall @ 1:13 pm
I run Windows 8.1 as a virtual machine in VMware Fusion.  I let the host operating system take care of security, specifically:

  • Require login at boot and wake from sleep
  • Screensaver enabled after 5 minutes
  • Require password at screensaver
  • Start screensaver hot corner

If I lock my host OS, then you can’t get to the guest virtual machines.  So, I don’t want additional screen savers or passwords on my virtual machines–life is too short to enter your password too often.

With previous versions of Windows, including Windows 8, you could set your computer to log in automatically using netplwiz.exe or the ‘control userpasswords2’ utility.

After upgrading to Windows 8.1, the “Users must enter a user name and password to use this computer.” option was no longer present for me.

To fix this it was necessary to go to Control Panel > User Accounts and select “Reset Security Policies” (see the screenshots at the end of this post).  Then you can use netplwiz or ‘control userpasswords2’ to set up automatic login.  Alternatively, you can use the Windows Sysinternals Autologon utility to configure automatic login.  Reboot for the changes to take effect.
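
If you prefer the command line, automatic login can also be configured directly via the Winlogon registry values. Here is a minimal sketch from an elevated command prompt, with YourUser and YourPassword as placeholders; note that this stores the password in plain text, which the Sysinternals Autologon utility avoids by saving it as an encrypted LSA secret:

rem Enable automatic login for the given account (run from an elevated prompt)
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AutoAdminLogon /t REG_SZ /d 1 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultUserName /t REG_SZ /d YourUser /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultPassword /t REG_SZ /d YourPassword /f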

You must also disable the lock screen–not only do I not want to enter my password to log in, I especially don’t want to have to swipe up a lock screen.  This feature only seems worthwhile on a tablet, but even on a tablet that’s debatable.  To disable the lock screen, launch gpedit.msc from the command prompt.  Navigate to Local Computer Policy > Computer Configuration > Administrative Templates > Control Panel > Personalization.  Double click “Do not display the lock screen” (or right click it and select “Edit”).  Select the “Enabled” radio button.  Reboot.
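
If you’d rather not click through gpedit, the same “Do not display the lock screen” policy can be set from an elevated command prompt; reg add will create the Personalization key if it doesn’t exist yet:

rem Equivalent of enabling the "Do not display the lock screen" policy
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\Personalization" /v NoLockScreen /t REG_DWORD /d 1 /f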

A couple of other desirable tweaks to Windows 8.1:

  1. Turn off the screen saver and go to sleep features in power settings.
  2. Set the Metro background to the same image as your desktop.
  3. Go directly to the desktop when you login.

I’ll let you figure out how to turn off the screen saver and sleep features on your own, and also how to set your desktop wallpaper to be your Metro start menu background.   Now that Windows 8.1 is feeling more like a desktop OS again, I find it desirable to go directly to the desktop on login.  To enable this, launch the desktop from the Metro menu.  Right click the taskbar and select Properties.  On the Navigation tab, select “When I sign in or close all apps on a screen, go to the desktop instead of Start.”

[Screenshot: Reset Security Policies]

[Screenshot: User Account Settings (netplwiz)]

[Screenshot: Disable the Lock Screen (gpedit)]

[Screenshot: Log In Directly to Desktop]

October 6, 2012

My Smart Phones through the Ages

Filed under: Technology — kendall @ 11:01 pm
I recently upgraded my iPhone 4 to the latest iPhone, the 5. When Claudia and I upgraded to iPhone 4’s we passed our 3G’s to our two oldest kids to use as iPods. We just recently did a similar thing when we upgraded to iPhone 5’s. We passed our iPhone 4’s to our two youngest and bought our oldest a 4S with actual voice, text and data. Rather than keeping our 3G’s around collecting dust, I decided to sell them to gazelle.com along with a couple of first generation iPod touches.  I figured I should cash them in while they are still worth something. Before I dropped my iPhone 3G’s in a mailer box never to be seen again, I thought I’d snap a couple of pics, including one of all of my smartphones since I started using them eight years ago.  I used a smart phone before everyone used a smart phone. My first was an HP iPAQ, circa 2004.  It is interesting to observe how far smart phones have come in just eight years.

I am not making any definitive statements about smart phones. I am not suggesting that any of these devices is representative of its particular vintage, or that the trends observable here are absolutely representative of the industry.  But while these trends may not precisely track the industry, I suspect that they are generally true of it.  It is interesting to observe the trending in this arbitrary sampling–what has happened to the smart phones I have used over time. So, with that disclaimer, what can I say?

Smart phones have become much faster.  My current smart phone’s CPU is essentially 20 times faster than my first.  The connectivity speeds have increased as well.  With respect to WiFi, my first smart phone was only capable of 802.11b, or 11 Mbps.  In the middle they became 802.11g capable, or 54 Mbps.  And now they are 802.11n capable, or up to 600 Mbps.  That is equivalent to a 50x increase.  With respect to wireless or cellular data, they have gone from 2G, or 1 Mbps maximum, to 4G/LTE with peak data rates as high as 300 Mbps.  That is equivalent to a 300x increase!

It is not surprising that storage capacities have grown as we seek to carry more and more data in our pockets.  While you can get the latest devices with higher storage configurations than in the past, I have found that 16 GB is about the right amount of storage for my needs.  However, with a higher resolution photo and video camera in the iPhone 5, I’ll have to see if I can still easily live with 16 GB.  Predictably, random access memory has also increased as these phones have more processing to do with increasingly demanding applications.  The interesting trend you can observe with these phones is that the memory has roughly doubled with nearly each new phone I purchased, for a 16x increase across the five phones.

They have become smaller.  My current smart phone is now about half the weight and a third of the thickness of my first smart phone.

   
| Spec | HP iPAQ h6325 | i-mate JAMin | Apple iPhone 3G | Apple iPhone 4 | Apple iPhone 5 |
| --- | --- | --- | --- | --- | --- |
| Year | 2004 | 2006 | 2008 | 2010 | 2012 |
| Operating System | Microsoft Windows Mobile 2003 PocketPC | Microsoft Windows Mobile 5.0 PocketPC | iOS 2.x – 4.2.1 | iOS 4.x – 6.0 | iOS 6.0 |
| Dimensions | 137.6 x 74.6 x 20.8 mm | 108 x 58 x 18.2 mm | 115.5 x 62.1 x 12.3 mm | 115.2 x 58.66 x 9.3 mm | 123.8 x 58.6 x 7.6 mm |
| Weight | 190 g | 150 g | 133 g | 137 g | 112 g |
| Display | 3.5 in, 53 x 71 mm (3763 mm²), 240 x 320 px (76,800 pixels) | 2.8 in, 42 x 57 mm (2394 mm²), 240 x 320 px (76,800 pixels) | 3.5 in, 53 x 71 mm (3763 mm²), 480 x 320 px (153,600 pixels) | 3.5 in, 53 x 71 mm (3763 mm²), 960 x 640 px (614,400 pixels) | 4 in, 52.3 x 90.6 mm (4738 mm²), 1,136 x 640 px (727,040 pixels) |
| CPU | 168 MHz ARM925T | 200 MHz ARM926EJ-S | 833 MHz ARM Cortex-A8 (underclocked to 600 MHz) | 1 GHz Apple A4 (underclocked to 800 MHz) | 1600 MHz Apple A6, dual core |
| Memory | 64 MB | 64 MB | 256 MB | 512 MB | 1 GB |
| Storage | 64 MB ROM, SD card up to 1 GB | 128 MB ROM, SD card up to 2 GB | 8 or 16 GB | 8, 16, 32 GB | 16, 32, 64 GB |
| Cellular Data | 2G | 2G | 3G | 3G | LTE |
| WiFi | 802.11b | 802.11b/g | 802.11b/g | 802.11b/g/n | 802.11b/g/n |
| Cameras | Rear: 0.3 MP (640 x 480) | Rear: 2 MP (1600 x 1200) | Rear: 3.0 MP, VGA video at 30 frame/s | Rear: 5.0 MP, 720p HD video at 30 frame/s; Front: 0.3 MP, 480p VGA video at 30 frame/s | Rear: 8.0 MP, 1080p Full HD video at 30 frame/s; Front: 1.2 MP, 720p HD video at 30 frame/s |

[Photo: Here is a stack of devices I sent to gazelle.com.]

[Photo: Boxing my devices to ship to gazelle.com.]

[Photo: All of my smart phones from 2004 to the present.]

[Photo: All of my smart phones from 2004 to the present. Elevation shot.]

[Photo: Observe the geometric progression of smart phone specs over time.]

September 9, 2012

Keeping All Your Files Synced Across All Your Cloud Storage Providers

Filed under: Technology — kendall @ 6:03 pm
I’ve been checking out a number of cloud storage providers–quite a few actually–altogether around a dozen. It is easy to say that not all cloud storage services are created equal. Not all have sync utilities. Not all have clients for both Apple OS X and Windows, or for both iOS and Android. Not all clients allow you to define precisely the folders to be sync’ed, or even allow for more than a single folder to be sync’ed…



A level playing field. I’m evaluating several of these services and I want to give them all, as much as possible, an equal footing. Mainly, I want to make sure that they are all syncing the same folders and files–more or less automatically. The best way to do that would be for all of the clients to use the same folders as their sync folders. That way, if I put a new file in a given folder, it would automatically be synchronized with all of the services simultaneously. The trick is that it is not trivial to direct all of the clients to sync the same folder–at least not for most of them. Of the services I’m checking out, only Bitcasa and SugarSync allow you to define the exact folders to be sync’ed–these two are probably the underdogs in this competition, though both have great products. The other four services in question are Dropbox, Box, Google Drive, and Microsoft’s SkyDrive–all much bigger players in the cloud storage space. All of these want to create a folder in a default or user defined location for the cloud files. Left to their own devices, these four services would create four separate folders in, for instance, your home or ~/Documents directory.

While it would be easy enough to place all four of these folders in the same location, this is still an unacceptable situation. In order to sync a single file to all four services, I would have to copy any given file to all four directories. And to delete the file from all four services, I would have to delete it from all four directories. It would be difficult to manually guarantee consistency across all four folders, and it would also be difficult to judge the responsiveness of each service, as not all folders would get the file or files at precisely the same moment. One could use rsync, the well-known Unix utility, to keep these four folders in sync, but that does not address the problem that you would now have four copies of the same file on your system, consuming your disk space that much faster. Uncool.
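
(For the record, the rsync approach would look something like this–one invocation per pair of folders, with the paths assumed for illustration. The trailing slash on the source means “copy the contents,” and --delete propagates removals.)

# Mirror the contents of the Dropbox folder into the SkyDrive folder
rsync -av --delete ~/Dropbox/ ~/SkyDrive/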

Enter symbolic links. Unix has a useful feature called symbolic links, or symlinks, designed to address exactly this kind of issue, and this was my first strategy for solving the problem… I have a 25 GB partition I created on my MacBook Pro’s hard drive for my cloud storage. I created this partition specifically for Microsoft’s SkyDrive, which requires that your partition not be case-sensitive, for compatibility with Windows. I also made it 25 GB, the same as my SkyDrive quota; that way, whatever fits on the partition should fit in the cloud. So, starting in my SkyDrive partition, which I have renamed CloudStorage, I created a hidden folder named “.sites”. You will need to create this folder in Terminal, as Finder does not allow you to create files that start with a period. Within this folder I created a separate folder for each of my cloud storage services. In the root of my CloudStorage folder I created folders for Documents, Photos, Music, etc. that would be shared across all six services. I then created symlinks to each of these folders in each of the cloud services’ folders. I recommend using SymbolicLinker to do this, as it takes the brain work out of getting the syntax correct and lets you create all of the symlinks simultaneously. For instructions on downloading, installing and using SymbolicLinker go to http://seiryu.home.comcast.net/~seiryu/symboliclinker.html.
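
If you prefer Terminal to SymbolicLinker, plain mkdir and ln -s do the same job. A minimal sketch, assuming the CloudStorage partition is mounted at /Volumes/CloudStorage:

# Hidden staging folder (Finder refuses names that start with a period)
mkdir /Volumes/CloudStorage/.sites
# One folder per service, e.g. Dropbox
mkdir /Volumes/CloudStorage/.sites/Dropbox
# A shared folder in the root, symlinked into the service's folder
mkdir /Volumes/CloudStorage/Documents
ln -s /Volumes/CloudStorage/Documents /Volumes/CloudStorage/.sites/Dropbox/Documents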

At this point, a couple of lessons learned:

  1. Bitcasa and SugarSync can map to any folders, so mapping them to this separate folder and symlink structure was a waste of time; it introduces complexity and a potential point of failure. Both Bitcasa and SugarSync seemed to navigate the symlinks fine, but why take the chance? It was unnecessary. So, I map these services directly to the actual folders and not to symlinks.
  2. Of the other four services, only Dropbox will follow symbolic links.

Halfway there. So, I still need a solution for the three remaining services, which will not allow you to define the exact folders to sync and will not follow symlinks. At this point I spent too much time trying to get hard links to work. My advice to you: don’t bother with hard links. There is a lot of stuff on the web about them, but I could not get them to work–at least not for folders, which is what I needed. This is the expected behavior: hard links are not supposed to work for directories, so while they may (and should) work for individual files, that does not solve our problem. I also found a lot of blogs saying that symlinks would work for Google Drive and SkyDrive. While this may work in Windows, there are fundamental differences in the way Windows and OS X handle symlinks, and I could not get it to work for these or for Box.
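
You can see the directory limitation for yourself in Terminal; ln without -s refuses to hard link a folder:

mkdir testdir
ln testdir testdir-link    # fails: hard links to directories are not allowed
touch testfile
ln testfile testfile-link  # works: hard links are fine for regular files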

Flip it. I had thought of reversing the folder structure… What I mean is, if SkyDrive, for instance, cannot navigate symlinks, then I will put the real folders in the SkyDrive folder and the symlinks in the root folder. I will point Dropbox’s symlinks at the real folders in the SkyDrive folder, and direct Bitcasa and SugarSync to sync the real folders in the SkyDrive directory. Several other bloggers had suggested the same thing. The problem with this is that it only solves the problem for one of the three remaining services.

I then slept on the problem. I really wanted to stick with SkyDrive, as they offer the highest free quota of any of the six services in question here. But at the same time, there is a lot of interest in Box at my work because of a great deal they offer to higher ed institutions through a partnership with Internet2–so I didn’t want to cut them out of my evaluation. And using Google Drive is important for its integration with Google Docs. So, I could not just write off any of the three remaining services. Besides, I hate giving up. Then it came to me, literally in a dream… I would nest the folders of the three remaining services. While not neat or elegant, by putting the real folders inside the SkyDrive folder, which would be inside the Google Drive folder, which would be inside the Box folder, all of the files in the sync folders should get synced across all six services simultaneously. When I say not neat or elegant, what I mean is that at Google Drive, all of my folders are in a folder labeled SkyDrive. At Box, my folders are two folders deep, in Google Drive and SkyDrive folders. This is not ideal, but the important thing is that my files should have an equal opportunity to get synced at all six service providers.

In summary, this is what I did, and you can too (a Terminal sketch of the finished layout follows the list):

  1. Create a partition for your cloud files, using Disk Utility. I called mine [CloudStorage]. Make sure it is not case-sensitive for compatibility with SkyDrive.
  2. Create a hidden folder in the root of [CloudStorage] for your folders. I named mine “.sites”.
  3. Run Box and define the sync folder as [CloudStorage]/.sites/
  4. Run Google Drive and define the sync folder as [CloudStorage]/.sites/Box Documents/
  5. Run SkyDrive and define the sync folder as [CloudStorage]/.sites/Box Documents/Google Drive/. SkyDrive should create your default folders. If not, create whatever default folders you’d like.
  6. Run Dropbox and define the sync folder as [CloudStorage]/.sites/
  7. Create symlinks for the SkyDrive folders in the Dropbox folder and in the root of [CloudStorage].
  8. Define the sync folders for Bitcasa and SugarSync as the SkyDrive folders.
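
Here is roughly what the finished layout looks like from Terminal–a sketch assuming the partition mounts at /Volumes/CloudStorage and that each client created its default folder name:

# The real folders end up nested under the hidden staging folder:
#   .sites/Box Documents/Google Drive/SkyDrive/{Documents,Photos,Music,...}
# Symlink the real SkyDrive folders to the root of the volume:
cd /Volumes/CloudStorage
for d in Documents Photos Music; do
  ln -s "/Volumes/CloudStorage/.sites/Box Documents/Google Drive/SkyDrive/$d" "$d"
done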

That’s it. If you place a file in any of the folders (symlinks) in the root of [CloudStorage], it should sync automatically to all six services. Likewise, if you delete a file from any of these folders, it will be removed from all six services. Similarly, if you upload or delete a file using a mobile client or web interface for any of the six services, it will be sync’ed to your local machine and to all five other services, provided your computer is on and you are logged in. So, by the simple act of placing a file in a folder, you can now back that file up six times over, and retrieve it immediately or later from six different places.

An interesting note. I was surprised to find that adding and deleting files using any method did not cause any problems. I thought that having so many services all checking the same folders for changes might create opportunities for one service to step on another–to delete or overwrite a file–but I have not seen that happen.

Update, October 24, 2012: I’ve done this with a couple more services, Pogoplug and CX, using these techniques. So I’ve gotten up to eight different services sync’ing in this manner. Eight services working like this may be absurd–six may have already crossed that line. Practically, there may be a point where the services start stepping on each other, though I haven’t seen that yet. The point is that you should be able to do this with the two, three or four services you prefer to use.

[Screenshot: This is what the folder structure looks like.]

[Screenshot: The symlinks look like folders and the rest of the directory structure is hidden.]

[Screenshot: SymbolicLinker in action.]

August 30, 2012

VMworld 2012 Takeaways, Day 4

Filed under: Technology,VMworld 2012 — kendall @ 1:36 pm
The Thursday keynote at VMworld generally highlights some inspirational technologies and their creators, generally not at all related to virtualization. In the 2012 Thursday keynote, Genius Machines, the speakers talked about autonomous algorithms that run the world, humanoid robots, and self-driving cars… In short, we are closer to the robot wars than ever.

Kevin Slavin gave a talk about how algorithms run our world. It was very similar to the TED Talk he gave in July 2011. Check it out, it’ll terrify you.


Dr. Dennis Hong, Director of RoMeLa (the Robotics and Mechanisms Lab) at Virginia Tech, talked about the why of creating humanoid robots… and why you’d make them play soccer. Chris Urmson of the Google self-driving car project talked about making driving safer, more efficient, and more productive… and also about making it accessible for people with disabilities, like the blind. Check out this video to see what I mean:


The one session I attended apart from the keynote was Virtualizing SQL 2012: Doing It Right, important for our organization. Here are the takeaways:

  1. You can do it, because you can create monster VMs that give you all the resources you will need.
  2. Yes, do it in tier 2 first.
  3. Collaborate, don’t keep it a secret.
  4. Do basic throughput testing (SQLIO/IOMETER) prior to deployment.
  5. Use unattended installs and SQL sysprep.
  6. The OS and databases don’t know they are virtualized.

Architecting for Performance

  1. Design for workload types or mix workloads
  2. Pay attention to storage types
  3. Understand your physical infrastructure’s limitations
  4. Use VMFS and NOT RDM. The performance difference is statistically insignificant but you get more features with VMFS.
  5. Use Thick Provisioned, Eager Zeroed disks for best performance.
  6. >80% of problems with virtualization happen at the storage layer
  7. Avoid over-committing CPU resources. Use one vCPU per core. Do not count a hyperthread as a full CPU core.
  8. Some important memory stuff. Check the slides. Sounded like there were a bunch of important memory considerations and settings on both the server and host.
  9. Mike says, “Avoid shares and reservations.”
  10. Jeff says, “Don’t use limits.”
  11. Don’t turn off ballooning.
  12. Enable jumbo frames.
  13. Run the SQL Server Best Practices Analyzer.

Here’s the slide deck. (But you’ll need to sign in.) A lot of slides. A lot of info. Good stuff.

I took the VCP exam at VMworld.  I recommend doing this because it is half price.  I don’t recommend doing this because when will you study?  It worked out for me.  I passed.

August 29, 2012

VMworld 2012 Takeaways, Day 3

Filed under: Technology,VMworld 2012 — kendall @ 11:47 am
Day three started off with no keynote. That is probably because there are a lot of parties Tuesday night, and therefore a lot of reasons to sleep in. Personally, my first session of the day started at 9:30, What’s New with vSphere Automation. They were talking about new commands in esxcli and PowerCLI. The PowerCLI guy sounds a bit like Ricky Gervais, but not as funny. Takeaways:

  1. 82 new commands in esxcli.
  2. A bunch of snap-ins, with over 270 cmdlets, in PowerCLI.

I literally get to take away a PowerCLI and esxcli quick reference poster. Kinda cool since there isn’t much 5.1 documentation at vmware.com yet.
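
If you haven’t played with esxcli, the general pattern is namespace(s), then a command. A couple of basics that work on 5.x hosts:

# Show the ESXi version and build
esxcli system version get
# List the host's VMkernel network interfaces
esxcli network ip interface list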

Session 2, VM Scare? Heterogeneous Virtualization’s Impact. This guy is not into heterogeneous hypervisors. He’s from Gartner, so it isn’t because he’s trying to sell us something. In fact, Gartner makes its living off complexity–if the world were simple, what would we need consultants for? He’s basically saying there aren’t good reasons for using multiple hypervisors.

  1. He says if you aren’t moving toward standardization, then you are not doing things enterprise; you aren’t doing things like a service provider; you are doing things like a small IT shop. Those are fighting words. But I think he has a point.
  2. If you are trying to avoid vendor lock-in for cost reasons, you need to understand there is a cost associated with switching from one hypervisor to another. Also, there is inherent waste in having multiple hypervisor silos.
  3. You will have to have multiple management tools, because you will need vendor specific tools to do some advanced operations.
  4. Public cloud doesn’t save you money. The VM is cheap; the management and DR are what cost you.
  5. Do not use different hypervisors for test/dev and production. It defeats the purpose and causes problems when moving from test/dev to production.
  6. Check out Gartner’s reference architecture for cloud management SaaS.

I got to ask about how far this recommendation extends to VDI, and he gave a bit of an “it depends” answer. He actually said that VDI is perhaps the only place where having a siloed approach makes sense. He had a couple of observations:

  1. CAPEX is a major consideration with VDI. You get CAPEX savings by bundling your VDI with your hypervisor.
  2. You may decouple VDI from hypervisor because in many/most shops the desktop guys are not the infrastructure guys. So, the infrastructure commitment to the VDI management is generally limited. Let the infrastructure guys choose the hypervisor; let the desktop guys choose the VDI.

I like that last observation. It may be just because choosing a VDI solution is a complex process and I’d be happy to pass the job off to our user team. My first order of business when I get back to work next week.

I went to the session, Securing the Virtual Environment: How to Defend the Enterprise. What did I learn? Our security guys need to understand virtualization. Today over 60% of servers are virtualized, and there are special considerations that apply to virtualized infrastructure. The old rules and methods may not apply. It will become unacceptable to halt forward progress because the security guys don’t know current technology. Our security guys should get this book, Securing the Virtual Environment: How to Defend the Enterprise Against Attack.

August 28, 2012

VMworld 2012 Takeaways, Day 2

Filed under: Technology,VMworld 2012 — kendall @ 2:23 pm
The day started with a less dramatic keynote than Monday’s. Steve Herrod, VMware CTO, gave an update on end-user computing. I like the prospects of Project Horizon–the promise of delivering the user a single place to go for all their apps, desktops, and data. Last year Steve demoed mobile OS virtualization on an Android device. That was pretty slick, but it was for Android only. This year he demoed some interesting developments in the iOS space. While the Android solution is still more impressive and elegant, at least VMware was able to demonstrate a commitment to developing for iOS, since that is what most customers want. The biggest eyebrow-raiser for me is the integration of XenApp into Horizon. VMware is going head-on at Citrix in the desktop virtualization space, and on other fronts: cloud file storage, user portal, self-provisioning… It is interesting that they seem to be throwing in the towel on application virtualization. Not sure if they are running into too much resistance on the electronic medical records front, where hospitals seem to be largely committed to XenApp, or if they are having trouble innovating with respect to ThinApp. In any case, it is an interesting development. Personally, I’m not sure I want to support infrastructure for both View and Xen if I don’t have to.

Takeaways from the session vCAT 3.0: Architecture to Implementation in 5 Easy Steps:
First of all, total false advertising. I was intrigued by the notion that there were five easy steps to architecting anything, much less VMware. As I suspected, it wasn’t easy. It turns out there are five steps, and they are essentially the same as any project management plan–five very involved steps that are probably easy enough for someone used to a complex project management process. But anyhow, there are some important takeaways.

  1. vCAT 3.0 is the latest release of a toolkit for VMware architects to do their own designs… methodologies and considerations for building and deploying VMware technologies.
  2. Use the VMware implementation examples docs; these provide real examples of VMware designs.
  3. Get vCAT.

Now for the arguably geekiest session yet, Virtualizing Oracle for Disaster Recovery with Data Guard and VMware vCenter Site Recovery Manager. Takeaways:

  1. Use SRM to fail over web and app VMs, but use Data Guard to fail over the databases to the secondary servers. Okay, but are you going to explain why? I think what they are saying is that it is preferable to use Data Guard for database failover rather than SRM, as Data Guard is designed specifically to avoid database corruption, whereas SRM is not. Good point. Touché. King’s to you.
  2. Don’t use VMware vSphere Replication, as it does not currently deliver synchronous replication. In fact, it does not guarantee less than a 15 minute RPO. So, some data loss is virtually guaranteed.

How about benefits of virtualizing Oracle?

  1. Performance is not an issue.
  2. Server consolidation
  3. Simplify deployment
  4. More…

Daily Fails. So today had some fails that warrant mentioning:

  1. Yesterday, when Steve Herrod mentioned that they would be live polling the audience during Tuesday’s keynote, I was doubtful that it would work. If you’ve ever been to a large concert or sporting event you know how difficult it can be to make a call, send a text, or check in on Facebook. Now, VMware is not a wireless networking provider, but they should know that if you get 20,000 geeks together in one room you are going to have 40,000 wireless devices. Consequently, just getting on wifi was tricky, and navigating to the polling website was darned near impossible. Deep in the bowels of the Moscone, AT&T’s 3G suffered from the same problem. I was able to respond to only two of the polls, and that with much effort. All of that work your device does trying to connect, trying to send, resending… It kills your battery while you get next to nothing done. During the keynote, I depleted half my battery. I don’t know who is providing this conference’s wifi, but I suspect they aren’t using the latest advances in this space. There are a number of companies innovating in large-scale, extremely high user density wireless–using many radios, arrays of multiple uni-directional antennas, and pairing devices to the right radio, frequency, and data rate based on the profile of the device. I’m thinking of vendors like Xirrus (disclosure: I’ve gotten nothing from Xirrus for this mention, though if they wanted to give me something, I’d take it). Ultimately, this lousy ability to respond to the polls probably doesn’t matter. If you have a basic knowledge of statistics, you understand that you only need a relatively small sample size to have a reasonably accurate poll–not good enough for voting, when every vote should count, but good enough for a poll. So, my complaint isn’t about the validity of the poll, but about the user experience. The irony was not lost on me that the audience participation aspect of this keynote about end-user computing and customer experience pretty well stunk.
  2. Yesterday Steve promised that all five of VMworld’s platinum sponsors would be delivering four minute live demos, and that we would be able to vote for the best in an American-Idol-America-chooses fashion, hence the need for live polling. NetApp ultimately won the competition, but I dispute that outcome. As I’ve already discussed, I don’t dispute the validity of the poll. My beef with the outcome is not about voter fraud or some such thing; NetApp simply didn’t satisfy the criteria of the competition. Not only were the polls supposed to be live, but also the demos. Of the five competitors, only EMC provided a live demo. The others used video demonstrations. HP was the sneakiest: their presenter was kind of miming the mouse movements during the video. Not sure if it was intentional. It seemed that way at first, but later it was obvious he wasn’t doing it live. In the case of Dell, the mouse movements and screen refreshes were faster than humanly possible. Cisco didn’t even do a demo, but a (lip sync’ed) song and essentially a PowerPoint. When you advertise a live demo, deliver a live demo, not a sales presentation. So, my kudos go out to EMC, who actually did a demo–of a file recovery–in nearly the allotted time. (Disclosure: the University of Oklahoma is an EMC customer, but we get storage from a lot of different vendors; EMC gave me nothing for this mention.)

August 27, 2012

VMworld 2012 Takeaways, Day 1

Filed under: Technology,VMworld 2012 — kendall @ 12:31 pm
VMworld got off to a dramatic start this morning with a drums-and-dancing spectacle and a passing of the baton from outgoing CEO Paul Maritz to former EMC COO Pat Gelsinger. Can’t say I have no concerns about continuity of vision and momentum. I’m also wondering what is behind the leadership change after only four years at the helm. Word is Paul will still be involved as a board member and still walking the halls of VMware.

There were a few other notable announcements like dropping the vRAM-based pricing model and announcing a per CPU pricing model with no core count, memory, or VM count restrictions. Other interesting announcements were a focus on end-user computing and management tools. I wonder what implications this has for “coopetition” between VMware and Citrix and between VMware and any number of management tools providers currently on display in the Solutions Exchange.

Anyhow, how about some takeaways from my first session, Architecting Auto Deploy for Availability and Scalability:

  1. Deploy your Auto Deploy servers as VMs. Build a small management ESX cluster that does not use Auto Deploy to host your Auto Deploy servers and other management servers, like vCenter.
  2. Deploy one Auto Deploy server per 40 hosts.
  3. Don’t install Auto Deploy on your vCenter server; it uses a lot of CPU. It is not multi-threaded, so it doesn’t benefit from many cores, but it will benefit from more cycles. Best practice is a 2 vCPU VM.

A tip from my second session, esxtop for Advanced Users: set the power regulation setting in the BIOS to OS control to get more useful data out of esxtop. Otherwise the OS, ESXi in this case, will not be able to differentiate CPU/core activity and will only display total utilization. At least that’s what I think he was saying. By the way, there sure seem to be a lot of people in this session… and all glassy-eyed. Not a good choice for after lunch. They are probably all trying to make people think they are total wizards by sitting in on such a geeky session. That’s why I’m here. I’m probably going to sound like a geek here, but this session and a later session, Become a Rock Star with PowerCLI and vCenter Orchestrator, nearly made me want to take some time away from my conference agenda to get on a console and mess around with esxtop and PowerCLI. I have three back-to-back parties to go to and I’m wondering when I can remote into a server. What is wrong with me? It will have to wait. I’m sure I’ll be in no condition for command lining by the time I get back to my hotel room.

Daily Fails: While VMware deserves a lot of commendation for pulling off such a massive conference, with a lot of complexity, and a lot of very impressive technology–and this year with a record 20,000+ attendees–I gotta report on a couple of fails from the conference so far:

  1. The Hands-on Labs crapped out on Sunday. I was excited by the prospect of getting some labs knocked out before the conference started. The labs were scheduled to open at 11 AM on Sunday; at 10:30 AM the queue was snaking all over the lobby of Moscone West. Some nerds got in line at the first chance they could, like it was some kind of Star Wars premiere. That didn’t work out too well, as by mid-afternoon technical difficulties were creating an estimated three-hour wait. I went back that evening after the welcome reception. While the beers in my system weren’t cooperating too well with my learning objectives, there was no wait.
  2. During my first session, I went to log into the VMworld mobile app to check my schedule and was greeted by a message saying the server was not available due to maintenance. It would be bad for any company’s website to go down precisely when its users need it, but this is particularly bad for a company whose whole business is about delivering daylight-hours maintenance without interrupting service, dynamically scaling out capacity on demand, and advocating going all-in on a cloud paradigm. Well, the cloud failed me. Fortunately, the service was back up in time to get me to my next session on time, but my trust in the cloud was shaken, if ever so slightly.
  3. This has to be said… The alumni lounge this year is fantastic… Compared with the partitioned-off little area in the basement at VMworld 2011, it is a serious upgrade. It’s at Jillian’s, a billiards hall and sports bar on the corner of the Metreon, so it is super-conveniently located–right in the middle of all the Moscone venues. Soft drinks are free, as is the adequate selection of snackables. Seating is adequate, and the booths and sofas are particularly comfortable. They could use some help with their wifi config, though. They’ve got three different public wifi networks available–all with no authentication, which is preferred–but can’t they find someone in San Francisco who could help them clean that up? Here’s the real problem… Where is the friggin’ coffee?!

April 5, 2012

How to Always Keep a Program Running in Mac OS X

Filed under: Technology — kendall @ 8:44 pm

With iOS 5, you can now automatically sync your iPhone over the air, as long as your iPhone is plugged into a charger and is on the same wifi network as the computer with your iTunes library.  I really like this feature.  I listen to a lot of podcasts, some of which come out daily.  So, with over-the-air syncing, I no longer have to plug my iPhone into my computer every day to make sure I’ve got my latest podcasts.  I shouldn’t even need to touch that computer: when I wake up in the morning and unplug my iPhone from its charger on my nightstand, I am ready to go and can listen to that first podcast while I’m getting showered and dressed.  That is, unless iTunes is not running.  Many times I have gotten up and a new podcast was not waiting for me, and it is universally because iTunes was closed.  Then I need to open iTunes and wait for it to check for new podcasts, download them, and then sync my iPhone.  Many wasted minutes and my day is off to a poor start.  So, how can I make sure that iTunes stays open?

I got this procedure from this Macworld forum post.

If you have iTunes set to open at login, deselect that; the launchd job below will take care of launching it.

Create the following plist file in the LaunchAgents folder of your home Library (~/Library/LaunchAgents) with the name user.launchkeep.itunes.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>user.launchkeep.itunes</string>
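  <!-- KeepAlive tells launchd to relaunch the program whenever it exits -->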
  <key>KeepAlive</key>
  <true/>
  <key>Program</key>
  <string>/Applications/iTunes.app/Contents/MacOS/iTunes</string>
</dict>
</plist>

Load this launchd job by running the following command in Terminal:

launchctl load ~/Library/LaunchAgents/user.launchkeep.itunes.plist

Now if you quit iTunes it will start right back up.  In order to quit the app at any time other than logout or shutdown, you’ll need to disable the job.  To do that, run the following command in Terminal:

launchctl remove user.launchkeep.itunes

November 18, 2011

Some Unsolicited Advice about Posting a lot of Pictures on Facebook

Filed under: Uncategorized — kendall @ 12:35 pm

Recently while checking Facebook, I saw that a friend had added 200 new photos to a gallery. I had to say something, so I messaged him with some unsolicited advice. I realized that there are others out there who need to hear this message, so I am republishing it here as an open letter:

Dear Friend,

I wouldn’t bother saying this unless I was trying to be a friend, and not just a Facebook friend but a REAL friend… And I don’t know how to say it other than just saying it… you need to be selective with the pictures you post. I really want to look at the pictures of your travels, but c’mon, 200 pictures!  When I see 200+ pictures in a gallery, I physically groan and either unhappily slog through the gallery or don’t bother at all, depending on my mood and general busyness. I say this because I like you. And I like your family. And I want to keep up with what you are doing. But seriously, you should stop posting every picture you take.

Here are some tips you didn’t ask for:

  • Professional photographers take a lot of pictures and then choose only the best for publication. Take a cue from that. Pick a few that are truly representative of your trip. You don’t have to capture every moment, place or event. Select pictures that capture the overall feelings, impressions, beauty… of your trip. I use a max 10% rule: only post 1 in 10. You will have to make tough choices. It is OK.
  • Do a little post processing. It isn’t so intimidating when you only have one or two dozen photos to work on. In iPhoto there is an auto-adjust that is very often all that is needed to improve the color and exposure of an ordinary picture and make it a lot better–to make it come alive.  This is important.  One great picture will leave a stronger impression on the viewer than 200 “meh” pictures.
  • If you are just using Facebook as a way to preserve your photos in the event of a disaster, there are better ways to do it… Carbonite, Flickr, Phanfare, SmugMug…
  • If you have friends or family who really want to see every single picture from your trip, Phanfare and SmugMug are good options. You can create private galleries and send invites to those people, or you can simply include a “to see more, follow this link” note in the description of your galleries on Facebook. Then everyone who wants to see some highlights from your trip can see them there, and if they want more, they can go to the SmugMug or Phanfare gallery.

Of course you can just say “I don’t give a rat’s a$$ if Kendall looks at my galleries or not. I’m gonna just keep doing it this way.” That would be fine and I won’t think any differently of you. Just trying to be helpful.

KG

