Understanding “Reverse CAPTCHAing”

I don’t know about you, but to me the idea of CAPTCHA usage is a little backwards. We are asking humans to prove that they are human. How dumb. Not only that, we are all but insisting that low-vision users don’t use our applications. Why not start thinking in reverse a little bit? Let’s make a form that makes the individual prove they aren’t a machine, not that they are a human.

Confused? It’s simple – CAPTCHAs exist to prove that whoever or whatever is filling out the form is a human, by having them read some scrambled mess that a computer could not translate. However, machines behave logically, so we can easily test that a form is being filled out by something that is not a machine without making the user do something annoying like trying to recognize squiggly versions of alpha characters without having a brain aneurysm.

So how do we do this? Well, simple… we add an input field to our form and then, using a little CSS, hide it (style="display:none;"). For the field label you could have “Please leave blank” – then for the input id, call it something typical, like city, name, email or whatever, but something that you aren’t using elsewhere on the form. So to review, you should have something like this:

<div style="display:none;">
  <label for="email_address">Please leave blank:</label>
  <input type="text" name="email_address" id="email_address" />
</div>

Now on the server processing page, check to see if this field is empty – if it isn’t, provide a friendly error saying that the form should be filled out manually.

So do you see how this works? Spiders and machines fill in the fields they see in a form – they add text to this field, and you simply ignore the interaction.
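
The post leaves the server-side language up to you, so here is a minimal sketch of the check in Ruby using Sinatra (the /contact route and the responses are just assumptions for illustration; only the “email_address” field name comes from the markup above):

require 'sinatra'

post '/contact' do
  # A real visitor never sees the hidden field, so it should come through empty.
  # If something has filled it in, stop here with the friendly error.
  unless params['email_address'].to_s.strip.empty?
    halt 400, 'Please fill out the form manually.'
  end

  # ...process the legitimate submission as usual...
  'Thanks for your message!'
end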

Before you complain about it, let me say it first: no, this technique is not applicable to all sites, such as news portals. But it will definitely help you reduce spider-related spam on those contact forms.

Using Ruby to Stay Informed With Innovative Thought

So I was thinking this evening about creating an RSS parser in Ruby. You know… Ruby supports this built in? Big surprise, right? All you have to do is require the rss library:

require 'rss'

Then, of course, if you want to open up a connection to a URL you need to include the open-uri library too:

require 'open-uri'

So that’s all the requirements. Next I’ll create a method called ReadRss that takes a single parameter called “url”.

def ReadRss(url)
  # Fetch the feed over HTTP (open-uri lets open() take a URL;
  # on newer Ruby versions this would be URI.open).
  open(url) do |page|
    respond = page.read
    # Parse the raw feed XML; the second argument turns off strict validation.
    result = RSS::Parser.parse(respond, false)
    puts "Blog: #{result.channel.title}, #{result.channel.description}"
    # Print a numbered list of the items in the feed.
    result.items.each_with_index do |item, i|
      puts "#{i + 1}  #{item.title}"
    end
  end
end

That’s it. Now all you have to do is call ReadRss with the site feed address. Here’s a good hint for you:

ReadRss("https://innovativethought.wordpress.com/feed/")

So now you can parse RSS feeds right from your Ruby script. An Atom parser will come shortly.

Resetting Your Forgotten MySQL Password

A few weeks ago I ran into an issue where I found myself locked out of the MySQL server that runs locally on my Mac. After some research and toying around I was able to reset the password, so for those of you unaware of how this process works, I’m going to share it with you. Now remember, this is done from my Mac; many of you running another system might find the solution isn’t accurate for you.

Also, please be advised: use this method at your own risk. You are a responsible human; if you don’t feel comfortable doing this procedure, find someone to help you. I am not responsible for any loss of data or corruption on your system.

Stopping MySQL
First, stop the service. You can do this using the preference pane if you have it installed; if you don’t, you’re likely well aware of how to do it from the Terminal. Either way, this should work for most users:

sudo /usr/local/mysql/support-files/mysql.server stop

You can restart using:

sudo /usr/local/mysql/support-files/mysql.server start

Skipping Access Tables
Alright – so open up a Terminal window and execute:

/usr/local/mysql/bin/safe_mysqld --skip-grant-tables

For MySQL 5 installations, use this instead (thanks to RY for pointing it out):
/usr/local/mysql/bin/mysqld_safe --skip-grant-tables

Running the Reset
Ok – so you have safe_mysqld running in one Terminal window; now open up another one and execute “/usr/local/mysql/bin/mysql mysql” (no quotes). If you aren’t familiar with it, this opens the MySQL console and selects the mysql database.

Write the reset query into the console as follows:

UPDATE user SET Password=PASSWORD('YOUR_PASSWORD')
WHERE Host='localhost' AND User='root';

Replace “YOUR_PASSWORD” with your desired password, of course. Once you’ve done that, just exit the console with “exit;”, stop the safe_mysqld process, and restart your MySQL server in normal mode.

SEO Through Blog Feeds? Oh God Please…

So during the Web 2.0 conference I was exposed to continuing babble about the idea that feed syndication and blog marketing in general is the next one-trick pony to fill all the needs a corporation might have for its unfocused marketing dollars. Here’s the general mistake with this logic… You see, yes, blogs are syndicated, but they are only “favored” and read often when they actually reach a level of communication the reader wishes to participate in. Blogs don’t become popular because the keywords included within them are presented en masse. They become popular because they represent a level of expertise and communicate clearly with skill. Even if you briefly view the blog of Robert Scoble, Technical Evangelist for Microsoft (and commonly quoted in a sick attempt to further poison the marketing industry into thinking RSS will save their jobs), you notice that rarely is a Microsoft product actually discussed on his blog. Rather, he widely discusses the processes and features of other companies and services. Why? It’s simple – to show expertise. To show that he isn’t trying to just sell a topic of focus.

It’s important that we look at all phases and features that come from the Internet as basic forms of human interaction. Think about it. I once bought a car from a local dealership in town… and throughout the life of that car, on holidays, my birthday, and the various days that ended with “y”, I would receive a “Hey! Enjoying your car? Come buy another one!” It was extremely annoying, and when thinking about buying a new car I went elsewhere. Now, a few years ago I bought a house. After I bought the house I received friendly reminders from my Real Estate Agent on tax benefits, important maintenance tips for the house, and all sorts of helpful things that didn’t push me to buy something directly – only a gentle mention, through the sticker on the envelope, of “We Love Referrals”. So what did this do? Must I really explain?

Think about it. When my Real Estate Agent would send me information, it became almost a subconscious thought that this person cared about my wellbeing; she was being informative and helpful. A level of trust was built. When you have brand identity and brand trust, you have customer loyalty. I still to this day refer her to everyone I know, and several members of my family have bought property with her.

Now, looking at customer communication in such a grassroots way, we see that marketing is still about people, and that people aren’t stupid. If you treat your blog like a billboard, they are going to notice it as such, just as people use banner-blocking software and anti-spam software. It’s only going to be a matter of time until there is a browser toolbar with a “blog quality” meter to inform the visitor of your intent to be informative or snake-oil them.

Now, I’m not referring to corporate blogging in general here – I think corporate blogging can be a great thing. Microsoft’s Channel9 is a great place for developers, and Microsoft’s IEBlog kept everyone up-to-date during the development of IE7. I’m talking about blogging and mass syndication as a way to keep spreading keywords throughout the net simply because they are now in your blog, able to be syndicated to the masses, and showing up on various pages everywhere.

It’s time to wake up, people! You want to get popular? Be good at what you do and help the community at large; your level of expertise will be recognized and appreciated, and your blog will continue to move up in the ranks.

But hey, this is just how I see it, I don’t have an MBA from a fancy college, nor do I have the word “Marketing” in my title. I am simply human, and thus a consumer.

Geocoding in Microformats with Google

So I finished a few TextMate snippets last night to help me in the production of microformats for my client’s websites. A few little tweaks left, but all in all I’m happy with them.

Today I decided to try something new and add some geocoding data to the address hCard on a new site’s “Contact Us” page. For those of you that haven’t really experimented with geocoding within a microformat yet, it’s actually quite easy to do. All you do is add the following code to your hCard:

<div class="geo">
	<span class="latitude">[lat number]</span>
	<span class="longitude">[long number]</span>
</div>

So, how do you get lat/long codes? Simple – you use Google Maps… Just go to http://maps.google.com and look up the location that you want to geocode. It’s good to give the found location a double-click on the map just to make sure that it is centered in the view. Then click on the “Link to this Page” button. Look at the URL and you’ll see a ton of various query strings. Look for the ll=, NOT the sll= or anything like that… it will say &ll= with a number, like so:

&ll=38.898114,-77.037163

Just after the last number will be another “&” to start the next query option; make sure you leave that last “&” out. That’s it – the first number in that string is your latitude and the second number (after the comma) is the longitude.
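
If you’d rather script the extraction than eyeball the URL, here’s a rough Ruby sketch that pulls the ll= pair out of a “Link to this Page” URL and drops it into the geo markup from above (the sample URL here is purely illustrative):

require 'cgi'
require 'uri'

link  = 'http://maps.google.com/?ie=UTF8&ll=38.898114,-77.037163&z=16'
query = CGI.parse(URI.parse(link).query)

# The ll= parameter holds "latitude,longitude"; split it on the comma.
lat, long = query['ll'].first.split(',')

puts <<-HTML
<div class="geo">
	<span class="latitude">#{lat}</span>
	<span class="longitude">#{long}</span>
</div>
HTML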

Happy Geocoding!