Fedora 7 on Fusion

Tonight I decided I was going to install the new distro from the Fedora Project (7). I spent the evening downloading the entire 2GB distribution. After the initial excitement of the download completing, I started up Parallels, set up the virtual machine, pressed start… Fedora booted from the CD, started the installation, I selected my language… and then… it can't find the media driver? What the hell? How are you reading the data right now? After some web searching I found this to be a common issue with Parallels. My option? Download the distro from a repository over HTTP or FTP. Yeah, screw that! I just spent the night downloading it – I'm not doing it again.

So what to do…?

Remembering what I had heard about VMware Fusion, I downloaded the beta. Installed the product. Set up the virtual machine. And off it went. I didn't even have to select my language. Fusion seems far more friendly to Linux distros in addition to Windows. So far the installation of Fedora is running completely smoothly – I'm about 30% through the package installation and I'm looking forward to my first run at Fedora.

Reading today about the announcement of Parallels 3.0, I am curious to learn what their "Linux tools" add-on in the product actually is. I guess time will tell.

Understanding “Reverse CAPTCHAing”

I don't know about you, but to me the idea of CAPTCHA usage is a little backwards. We are asking humans to prove that they are human – how dumb. Not only that, but we are all but insisting that low-vision users don't use our applications. Why not start thinking in reverse a little bit? Let's make a form that makes the individual prove they aren't a machine, not that they are a human.

Confused? It's simple – CAPTCHAs exist to prove that whoever or whatever is filling out the form is a human, by making them read some scrambled mess that a computer could not translate. However, machines behave logically – so we can easily test that a form is being filled out by something that is not a machine, without making the user do something annoying like trying to recognize squiggly versions of alpha characters without having a brain aneurysm.

So how do we do this? Well, simple… we add an input field to our form and then, using a little CSS, hide it (style="display:none;"). For the field label you could have "Please leave blank" – then for the input id, call it something typical, like city, name, or email, but something that you aren't using elsewhere on the form. So to review, you should have something like this:

<div style="display:none;">
  <label for="email_address">Please leave blank:</label>
  <input type="text" name="email_address" id="email_address" />
</div>

Now on the server-side processing page, check to see if this field is empty – if it isn't, show a friendly error saying the form needs to be filled out manually.
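To make that concrete, here's a minimal sketch of what that server-side check might look like in Ruby. This is just an illustration, not a drop-in implementation: it assumes the submitted fields arrive in a hash keyed by field name (as they would in Rails or most web frameworks), and it reuses the email_address field from the form above.

# Minimal sketch of the honeypot check (illustrative only).
# Assumes the submitted form fields arrive in a hash keyed by field name.
def spam_bot?(fields)
  # A real visitor never sees the hidden field, so it should come back blank.
  # Any text in it means something automated filled out the form.
  !fields["email_address"].to_s.strip.empty?
end

# Example usage with a pretend submission:
submission = { "name" => "Jane", "email_address" => "bot@spam.example" }
if spam_bot?(submission)
  puts "Sorry, this form needs to be filled out manually."
else
  puts "Processing the submission..."
end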

So do you see how this works? Spiders and machines fill in the fields they see in a form – they add text to this field, and you simply ignore the interaction.

Before you complain about it, let me say it first: no, this technique is not applicable to all sites, such as news portals. But it will definitely help you reduce spider-related spam on those contact forms.

Using Ruby to Stay Informed With Innovative Thought

So I was thinking this evening about creating an RSS parser in Ruby. You know… Ruby supports this built in? Big surprise, right? All you have to do is require the rss library:

require 'rss'

Then of course, if you want to open up a connection to a URL, you need to include the open-uri library too:

require 'open-uri'

So those are all the requirements. Next I'll create a method called ReadRss that takes a single parameter called "url".

def ReadRss(url)
  # Fetch the feed over HTTP via open-uri
  open(url) do |page|
    response = page.read
    # Parse the feed (the second argument disables strict validation)
    result = RSS::Parser.parse(response, false)
    puts "Blog: #{result.channel.title}, #{result.channel.description}"
    # Print each item with a 1-based index
    result.items.each_with_index do |item, i|
      puts "#{i + 1}  #{item.title}"
    end
  end
end

That’s it. Now all you have to do is call ReadRss with the site feed address. Here’s a good hint for you:

ReadRss("https://innovativethought.wordpress.com/feed/")

So now you can parse RSS feeds right from your Ruby script. An Atom parser will come shortly.

Resetting Your Forgotten MySQL Password

A few weeks ago I ran into an issue where I found myself locked out of the MySQL server that runs locally on my Mac.  After some research and toying around I was able to reset the password.  So for those of you unaware of how this process works, I'm going to share it with you.  Now remember, this was done on my Mac; many of you running on another system might find the solution isn't accurate for you.

Also, please be advised: use this method at your own risk.  You are a responsible human; if you don't feel comfortable doing this procedure, find someone to help you – I am not responsible for any loss of data or corruption on your system.

Stopping MySQL
First, stop the service.  You can do this using the preference pane if you have it installed; if you don't, you're likely well aware of how to do it from the Terminal.  Either way, this command should work for most users:

sudo /usr/local/mysql/support-files/mysql.server stop

You can restart using:

sudo /usr/local/mysql/support-files/mysql.server start

Skipping the Grant Tables
Alright – so open up a Terminal window and execute:

/usr/local/mysql/bin/safe_mysqld --skip-grant-tables

For MySQL 5 installations, use this instead (thanks to RY for pointing it out):

/usr/local/mysql/bin/mysqld_safe --skip-grant-tables

Running the Reset
Ok – so you have safe_mysqld running in one Terminal window. Now open up another one and execute "/usr/local/mysql/bin/mysql mysql" (no quotes).  If you aren't familiar, this opens up the MySQL console and selects the mysql database.

Write the reset query into the console as follows:

UPDATE user SET Password=PASSWORD('YOUR_PASSWORD')
WHERE Host='localhost' AND User='root';

Replace "YOUR_PASSWORD" with your desired password, of course.  Once you've done that, just exit the console with "exit;", close the safe_mysqld process, and restart your MySQL server in normal mode.
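One optional extra step that isn't part of the walkthrough above, but is a standard MySQL statement: before exiting the console you can flush the grant tables so the running server picks up the new password right away rather than waiting for the restart.

FLUSH PRIVILEGES;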

SEO Through Blog Feeds? Oh God Please…

So during the Web 2.0 conference I was exposed to continuing babble on the idea that feed syndication, and blog marketing in general, is the next one-trick pony to fill all the needs a corporation might have for its unfocused marketing dollars. Here's the general mistake with this logic… You see, yes, blogs are syndicated, but they are only "favored" and read often when they actually reach a level of communication the reader wishes to participate in. Blogs don't become popular because the keywords included within them are presented in mass. They become popular because they represent a level of expertise and communicate clearly and with skill. Even if you briefly view the blog of Robert Scoble, Technical Evangelist for Microsoft (and commonly quoted in a sick attempt to further poison the marketing industry into thinking RSS will save their jobs), you'll notice that rarely is a Microsoft product actually discussed on his blog. Rather, he widely discusses the processes and features of other companies and services. Why? It's simple – to show expertise. To show that he isn't trying to just sell a topic of focus.

It's important that we look at all phases and features that come from the Internet as basic forms of human interaction. Think about it. I once bought a car from a local dealership in town… and throughout the life of that car, on holidays, my birthday, and the various days that ended with "y", I would receive a "Hey! Enjoying your car? Come buy another one!" It was extremely annoying, and when thinking about buying a new car I went elsewhere. Now, a few years ago I bought a house. After I bought the house I received friendly reminders from my Real Estate Agent on tax benefits, important maintenance tips for the house, and all sorts of helpful things that didn't push me to buy anything directly – only a gentle mention, through the sticker on the envelope, of "We Love Referrals". So what did this do? Must I really explain?

Think about it. When my Real Estate Agent would send me information, it became almost a subconscious thought that this person cared about my wellbeing; she was being informative and helpful. A level of trust was built. When you have brand identity and brand trust, you have customer loyalty. I still to this day refer her to everyone I know, and several members of my family have bought property with her.

Now, looking at customer communication in such a grassroots way, we see that marketing is still about people, and that people aren't stupid. If you treat something like a billboard, they are going to notice it as such – just as people use banner-blocking software and anti-spam software, it's only going to be a matter of time until there is a browser toolbar with a "blog quality" meter to inform the visitor of your intent to be informative or to snake-oil them.

Now, I'm not referring to corporate blogging in general here – I think corporate blogging can be a great thing.  Microsoft's Channel 9 is a great place for developers, and Microsoft's IEBlog kept everyone up-to-date during the development of IE7.  I'm talking about blogging and mass syndication used to continue the act of just spreading keywords throughout the net, because they're now in your blog and able to be syndicated to the masses and show up on various pages everywhere.

It's time to wake up, people! You want to get popular? Be good at what you do and help the community at large, and your level of expertise will be recognized and appreciated – and thus your blog will continue to move up in the ranks.

But hey, this is just how I see it. I don't have an MBA from a fancy college, nor do I have the word "Marketing" in my title. I am simply human, and thus a consumer.

Geocoding in Microformats with Google

So I finished a few TextMate snippets last night to help me in the production of microformats for my client's websites. A few little tweaks left, but all in all I'm happy with them.

Today I decided to try something new and add some geocoding data to the address hCard on a new site's "Contact Us" page. For those of you that haven't yet experimented with using geocoding within a microformat, it's actually done quite easily. All you do is add the following code to your hCard:

<div class="geo">
	<span class="latitude">[lat number]</span>
	<span class="longitude">[long number]</span>
</div>

So, how do you get lat/long codes? Simple – you use Google Maps… Just go to http://maps.google.com and look up the location that you want to geocode. It's good to give the found location a double-click in the map just to make sure it is centered and in focus. Then click on the "Link to this Page" button. Look up in the URL and you'll see a ton of query string parameters. Look for the ll= – NOT the sll= or anything like that… it will say &ll= followed by a pair of numbers, like so:

&ll=38.898114,-77.037163

Just after the last number there will be another "&" to start the next query option – make sure you leave that last "&" out. That's it – the first number in that string is your latitude and the second number (after the comma) is your longitude.
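So, just to tie it together using the sample numbers above (illustrative only – substitute your own location's values), the finished geo block would look like this:

<div class="geo">
	<span class="latitude">38.898114</span>
	<span class="longitude">-77.037163</span>
</div>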

Happy Geocoding!

Web 2.0: Day 2 Recap – Sessions

Of course I am recapping here and reviewing most of my notes now that I am home. I still wanted to share what I experienced, so I'm posting this information a little late. Days 3 and 4 will be coming as well.

The New Hybrid Designer

This was a panel discussion that included Kelly Goto, Jeremy Keith, and Chris Messina. Unfortunately it became more of an introduction to the design-related track than really getting down to what it means to be a hybrid designer. Getting designers to learn more about application design and architecture was one of the most important key points here. Using documentation such as Apple's Application Design Guidelines is a great suggestion. Remember as well that the line between design and development continues to grow thinner. Strong consideration was also placed on "placelessness" – the idea that content should be separated not only from design but also from context and device limitations. Chris Messina also spoke strongly against applications such as Adobe's Apollo, which will end the "View Source" option, noting that many of today's developers learned by studying someone else's work. I was definitely that person, and I'm sure many of today's beginners learning HTML are doing the same. It is important we don't kill the growth of our community by building applications that take that learning path away.

Rich Internet Applications with Apollo

Sadly, the presentation by Mike Chambers, as he tried to show the benefits of Apollo, left me wanting more in general. I can't blame Mike for it completely, because the network was extremely congested and he was unable to demo many of the features of online application access. The thing that really has me bothered by the platform in general is that, in a bad way, it feels like "half a product". Now, I'm a strong advocate of building "half a product" rather than a "half-assed product". Perhaps I would lean toward being more enthusiastic about this product if I felt the features planned for inclusion in the initial release were the "correct half" of the product.

If you want to streamline application development to "bridge the gap" between the web and desktop platforms, you need to create a way to easily deploy single page/controller-level updates to all the desktop clients. Streamlined, without interruption – with no option to skip the functionality update. It should be a replica of the features you are mimicking from the web application you are converting. Not necessarily in user interface, but in function and user experience.

Vulnerabilities 2.0 in Web 2.0: Next Generation Web Apps from a Hacker’s Perspective

This was an amazing conference session, given by a partner at iSEC Partners, a security research and pen-testing firm. I'm hoping to get a copy of the slides, as the presenter did tell us that they would be available. It got into topics far more advanced than just simple cross-site scripting issues. Major vulnerabilities exist in all current AJAX framework implementations, and a big issue with most AJAX sites is that the functions and methods are readily visible to anyone visiting the application. Having methods within your code like "MakeMeAdmin()" is ridiculous! But it still happens. Remember as well that cross-site request forgery techniques are assisted because the browser will pass the cookie if a session is active in another window or tab – cookies are shared among windows. It turns out the guys over at iSEC Partners are going to be publishing the new Hacking Exposed book in December 2007, entitled 'Hacking Exposed: Web 2.0'.

The Arrival of Web 2.0: The State of the Union on Browser Technology

I'll be honest and say I don't know how much really came out of this session other than, "Browser companies are starting to work together today." People representing Opera, Mozilla, and IE were on the panel. Other than continuing to hear that Firefox 3 will offer local storage so you can natively develop offline applications, and that the Mozilla Foundation is working on issues that exist in JavaScript as it is currently being used with Ajax (the previous session was of course stuck in my head at the time), that was about it on that one.

Cleaning up my Ruby Fizzbuzz

As I become more familiar with Ruby and Rails, I'm of course going to start to see better ways to write a snippet of code. Here is an updated script that is a little leaner:

(1..100).each do |i|
  fb = []
  fb << "Fizz" if (i % 3) == 0
  fb << "Buzz" if (i % 5) == 0
  fb << i if (i % 3) != 0 and (i % 5) != 0
  puts fb.join("")
end

I am still trying to review my notes, so I just ask that those of you awaiting my review of the Web 2.0 Expo please continue to be patient.