Friday, April 27, 2012

Automatically Creating Google Homework Sites

We had an interesting Google Sites idea for teaching floated our way yesterday...

Q. Could you automatically create a Google Site for a list of students that only they and their tutor can see? Also, at a given time could the students' permission be changed from "writer" to "reader" when the deadline had arrived? It'd be good if you could use a Template Site so that the site could be set up with the right pages and prompts to begin with.


With the GData API it is possible. Here's an example python script that shows how... First, load the libraries. You may need to download these if you don't have them already.


import atom.data
import gdata.sites.client
import gdata.sites.data
import gdata.acl.data
from time import sleep


I then created lots of stub functions in the hope that I could better understand how the API works. The problems ( for me ) here were that ...

a. This code uses a now old-fashioned means of connecting to the API. If this were going to be "real code" we would have to use OAuth to connect to the API.
b. The API documentation is hopeless.
c. It seems to work around the idea of (Atom) feeds which have <entry> objects, which have <other objects> in them, which you get to via various URLs, which just makes the whole process more cumbersome... anyway...

The first thing you do is connect to the API


client = gdata.sites.client.SitesClient(source='University of York', site='UoY') # Can be anything, I think
client.domain = "york.ac.uk" #Not needed for non Apps domains
client.ClientLogin('****@york.ac.uk', '*********', client.source)


... then I wanted to see a list of all my sites and also show the URLs I'd need to use for each.



def list_sites():
    '''
    Shows a list of the URLs you will need
    '''
    feed = client.GetSiteFeed()
    for entry in feed.entry:
        print '%s (%s)' % (entry.title.text, entry.site_name.text)
        if entry.summary.text:
            print 'description: ' + entry.summary.text
        print "\turl:", entry.find_self_link()
        print "\tedit link", entry.find_edit_link()
        print "\tacl_link:", entry.get_acl_link().href
        print "\tclickable link:", entry.get_alternate_link().href
        print "_" * 80

... this produces something like this...




Code Example: Timed Content Creation (code-experiments)
description: This site is an example that uses an App Script to grab some data from the web and save it in a Site's pages
url: https://sites.google.com/feeds/site/york.ac.uk/code-experiments
edit link https://sites.google.com/feeds/site/york.ac.uk/code-experiments
acl_link: https://sites.google.com/feeds/acl/site/york.ac.uk/code-experiments
clickable link: https://sites.google.com/a/york.ac.uk/code-experiments/
________________________________________________________________________________
Collaborative Tools Project (collaborative-tools-project)
url: https://sites.google.com/feeds/site/york.ac.uk/collaborative-tools-project
edit link https://sites.google.com/feeds/site/york.ac.uk/collaborative-tools-project
acl_link: https://sites.google.com/feeds/acl/site/york.ac.uk/collaborative-tools-project
clickable link: https://sites.google.com/a/york.ac.uk/collaborative-tools-project/
________________________________________________________________________________
Collaboratomatic (collaboratomatic)
url: https://sites.google.com/feeds/site/york.ac.uk/collaboratomatic
edit link https://sites.google.com/feeds/site/york.ac.uk/collaboratomatic
acl_link: https://sites.google.com/feeds/acl/site/york.ac.uk/collaboratomatic
clickable link: https://sites.google.com/a/york.ac.uk/collaboratomatic/
________________________________________________________________________________
Departmental 20:20s (departmental-20-20s-july-2011)
description: A place to collect resources from the Information Directorate 20:20 presentations
url: https://sites.google.com/feeds/site/york.ac.uk/departmental-20-20s-july-2011
edit link https://sites.google.com/feeds/site/york.ac.uk/departmental-20-20s-july-2011
acl_link: https://sites.google.com/feeds/acl/site/york.ac.uk/departmental-20-20s-july-2011
clickable link: https://sites.google.com/a/york.ac.uk/departmental-20-20s-july-2011/



With this list of URLs I can then either access a site for editing, or change its permissions ( or ACLs ) like this...




def get_site(site_feed_url):
    'returns a SiteEntry object that you can fiddle with'
    site = client.GetEntry(site_feed_url)
    print site.title.text
    print "\tedit_link:", site.get_edit_link().href
    print "\tacl_link:", site.find_acl_link()
    return site



def get_site_by_name(name):
    feed = client.GetSiteFeed()
    for entry in feed.entry:
        if entry.title.text == name:
            print "url:", entry.find_self_link()
            print "edit link", entry.find_edit_link()
            print "acl_link:", entry.get_acl_link().href
            return entry


>>> site = get_site_by_name("My Template Site")

url: https://sites.google.com/feeds/site/york.ac.uk/my-template-site
edit link https://sites.google.com/feeds/site/york.ac.uk/my-template-site
acl_link: https://sites.google.com/feeds/acl/site/york.ac.uk/my-template-site
<gdata.sites.data.SiteEntry object at 0x10143c1d0>

I can then nosey around in my site to see what it can do...


>>> dir( site )
['FindAclLink', 'FindAlternateLink', 'FindChildren', 'FindEditLink', 'FindEditMediaLink', 'FindExtensions', 'FindFeedLink', 'FindHtmlLink', 'FindLicenseLink', 'FindMediaLink', 'FindNextLink', 'FindPostLink', 'FindPreviousLink', 'FindSelfLink', 'FindSourceLink', 'FindUrl', 'GetAclLink', 'GetAlternateLink', 'GetAttributes', 'GetEditLink', 'GetEditMediaLink', 'GetElements', 'GetFeedLink', 'GetHtmlLink', 'GetId', 'GetLicenseLink', 'GetLink', 'GetNextLink', 'GetPostLink', 'GetPreviousLink', 'GetSelfLink', 'IsMedia', 'ToString', '_XmlElement__get_extension_attributes', '_XmlElement__get_extension_elements', '_XmlElement__set_extension_attributes', '_XmlElement__set_extension_elements', '__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_attach_members', '_become_child', '_get_namespace', '_get_rules', '_get_tag', '_harvest_tree', '_list_xml_members', '_members', '_other_attributes', '_other_elements', '_qname', '_rule_set', '_set_namespace', '_set_tag', '_to_tree', 'attributes', 'author', 'category', 'children', 'content', 'contributor', 'control', 'etag', 'extension_attributes', 'extension_elements', 'find_acl_link', 'find_alternate_link', 'find_edit_link', 'find_edit_media_link', 'find_feed_link', 'find_html_link', 'find_license_link', 'find_media_link', 'find_next_link', 'find_post_link', 'find_previous_link', 'find_self_link', 'find_source_link', 'find_url', 'get_acl_link', 'get_alternate_link', 'get_attributes', 'get_edit_link', 'get_edit_media_link', 'get_elements', 'get_feed_link', 'get_html_link', 'get_id', 'get_license_link', 'get_link', 'get_next_link', 'get_post_link', 'get_previous_link', 'get_self_link', 'id', 'is_media', 'link', 'namespace', 'published', 'rights', 'site_name', 'source', 'summary', 'tag', 'text', 'theme', 'title', 'to_string', 'updated']


From here I created a few more functions, mainly just trying to understand how the API works.

def create_site(title, description='', theme='slate', source_site_url=None):
    if source_site_url:
        # Copy an existing ( template ) site
        print "copying:", source_site_url
        site = client.CreateSite(title, description=description, theme=theme, source_site=source_site_url)
    else:
        # Create a blank site
        site = client.CreateSite(title, description=description, theme=theme)
    print "site created!", site.find_edit_link()
    return site

def get_acls(site_acl_url):
    '''
    Gets the site's permissions as a feed if given an acl_link, like this:
    'https://sites.google.com/feeds/acl/site/york.ac.uk/my-lovely-site'
    '''
    if "/acl/" not in site_acl_url:
        print "That's probably not the right URL!"
    feed = client.GetAclFeed(site_acl_url)
    for entry in feed.entry:
        try:
            print '%s (%s) - %s' % (entry.scope.value, entry.scope.type, entry.role.value)
        except Exception, err:
            # Some entries don't have all of these attributes
            pass
    return feed
def share_a_site_with(acl_url, email, role='writer'):
    '''
    Needs an acl_link to work.
    role can be reader, writer or owner.
    '''
    scope = gdata.acl.data.AclScope(value=email, type='user')
    role = gdata.acl.data.AclRole(value=role)
    acl = gdata.acl.data.AclEntry(scope=scope, role=role)
    # fudgetastic! client.Post() seems to want the URL without the host:
    acl_url = acl_url.replace('https://sites.google.com', '')  # !!!! ???
    acl_entry = client.Post(acl, acl_url)
    print "%s %s added as a %s" % (acl_entry.scope.type, acl_entry.scope.value, acl_entry.role.value)
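For example, a hypothetical run ( the site URL and email address here are made up ) looks like this...

>>> share_a_site_with('https://sites.google.com/feeds/acl/site/york.ac.uk/my-template-site',
...                   'someone@york.ac.uk', role='reader')
user someone@york.ac.uk added as a reader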

def list_a_sites_acls(site_url):
    feed = client.GetAclFeed(site_url)
    for entry in feed.entry:
        print '%s (%s) - %s' % (entry.scope.value, entry.scope.type, entry.role.value)
    return feed

def list_all_copies(site_url):
    '''
    Gets all copies of a template site: iterate through all my sites and
    find the ones whose source link is the template site's URL.
    '''
    copies = []
    feed = client.GetSiteFeed()
    for entry in feed.entry:
        # For each site, check its source link to see if it is site_url
        if entry.FindSourceLink() == site_url:
            copies.append(entry)
    return copies
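A site can't be created if one with that name/URL already exists ( see the conclusions below ), so something like this untested sketch could clear out old copies between test runs, using the same client.Delete() call as remove_a_user() below...

def delete_all_copies(template_site_url):
    '''
    Untested sketch: deletes every site that was copied from the given
    template site.
    '''
    for entry in list_all_copies(template_site_url):
        print "deleting:", entry.title.text
        client.Delete(entry.find_edit_link(), force=True)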
def remove_a_user(site_url, email):
    "Shows how to remove a user's access to a site"
    feed = client.GetAclFeed(site_url)
    for entry in feed.entry:
        print entry.scope.value
        if entry.scope.value == email:
            print "DELETING ACCESS FOR:", email
            client.Delete(entry.GetEditLink().href, force=True)
            break
    print "done!"
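The original question also asked about demoting students from "writer" to "reader" once the deadline had arrived. I didn't get that far, but the ACL feed should make it possible: find the user's entry, change its role and send it back. A rough, untested sketch ( I haven't verified that client.Update() accepts an AclEntry like this )...

def change_role(site_acl_url, email, new_role='reader'):
    '''
    Untested sketch: change a user's role on a site, e.g. from
    'writer' to 'reader' when the homework deadline arrives.
    '''
    feed = client.GetAclFeed(site_acl_url)
    for entry in feed.entry:
        if entry.scope.value == email:
            entry.role.value = new_role
            client.Update(entry)
            print "%s is now a %s" % (email, new_role)
            break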

And Finally....

I made a test function to get my Template Google Site, make copies of it ( each with their own name ) and grant "writer" access to each copy based on a list of emails.

def do_test():
    global client  # re-bind the module-level client used by the helper functions
    template_site_url = 'https://sites.google.com/feeds/site/york.ac.uk/my-template-site'
    # This might be read in from a spreadsheet.
    students_emails = ['student1@york.ac.uk',
                       'student2@york.ac.uk',
                       'student3@york.ac.uk',
                       'student4@york.ac.uk']
    for i, student_email in enumerate(students_emails):
        # Re-initialise the client for each new site ( see conclusions )
        client = gdata.sites.client.SitesClient(source='University of York', site='UoY')
        client.domain = "york.ac.uk"
        client.ClientLogin('****@york.ac.uk', '********', client.source)
        # Create a copy of the template site with a new name
        new_site_name = "Auntie Homework: %s" % student_email
        print "creating:", new_site_name
        site = create_site(new_site_name, source_site_url=template_site_url)
        sleep(5)
        # Get its acl_url
        new_acl_url = site.get_acl_link().href
        print "sharing:", new_acl_url
        # Now add that user as a writer to the new site's permissions
        share_a_site_with(new_acl_url, student_email, role="writer")
        sleep(5)
    print "Done creating", i + 1, "Google Sites"


Conclusions

It worked! It made 4 new Google Sites based on my template site, and each was shared with one other person with "writer" access.

But there's still a heap of work to make this into something genuinely useful.

  • Because the client.ClientLogin() method used to connect to the API is deprecated, we will have to re-work this bit and create an online user interface for it. I think creating a GUI in Google Spreadsheets might be the easiest but my JavaScript ain't that hot. ( There's a rough sketch of the OAuth version after this list. )
  • Although each copied site keeps track of the site it was copied from, there's no way of running this script for two separate homework assignments with separate lists of students AND having it keep track of which site was in which cohort.
  • I didn't look at "group permissions", which might be handy. When the homework deadline has passed, a student group, rather than the writer of the site, might be given read-only access. This would mean that, once finished, everyone can see everyone else's sites.
  • I found that, without the sleep() calls and re-initialising the client instance each time I wanted to create a new site, the do_test() function threw a wobbly.
  • A site can't be created if one with that name/URL already exists. I'd need a function to clear all copies of a template site ( there's a rough sketch above, after list_all_copies() ).
  • Potentially, I think we may be able to either subscribe to the newly created sites, or do something with their content feeds, so that as a tutor you'd be able to keep an eye on who was or wasn't completing their homework in one place. I didn't get that far though. At the very least, the script might create a page on the Template Site listing the copies that had been made.
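
For what it's worth, here's an untested sketch of what replacing ClientLogin with OAuth might look like, using gdata.gauth.OAuth2Token. CLIENT_ID and CLIENT_SECRET are placeholders ( they'd come from the Google APIs console ), and the exact method names would need checking against your version of the gdata library...

import gdata.gauth
import gdata.sites.client

# Untested sketch: OAuth 2 instead of the deprecated ClientLogin.
token = gdata.gauth.OAuth2Token(
    client_id='CLIENT_ID',
    client_secret='CLIENT_SECRET',
    scope='https://sites.google.com/feeds/',
    user_agent='homework-sites-script')

# Send the user to this URL, then have them paste back the code they are given
print token.generate_authorize_url(redirect_uri='urn:ietf:wg:oauth:2.0:oob')
code = raw_input('Paste the authorisation code: ')
token.get_access_token(code)

client = gdata.sites.client.SitesClient(source='University of York', site='UoY')
token.authorize(client)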



Wednesday, April 25, 2012

Application Building For The Rest Of Us?




I meet lots of people at the University who want technology to help them with their work. With some people it's often a case of introducing them to something we already offer and/or guiding them through doing things a new way.

But often what people really want is bespoke custom software development. They don't want an off-the-shelf technology, because in many cases, it just isn't a good enough fit. They don't want a wiki or a blog but instead they might need a simple web app that does a simple thing.

The Problem With Custom Development

I'm a big fan of the idea of everyone being able to make their own software, not because I want everyone to become geeks, but because I want making software to become easy enough for everyone to do.

One of the problems with development is knowing where to start. At the moment the University offers a MySQL, CGI and PHP service which allows you to work within certain parameters. With these tools you could, in theory, create quite a complex web application. Hooking your application into the University's authentication system for log-ins might be problematic; hooking it into other data sources, such as a list of students studying English, more so.

The main problem with this approach is that it is so damn hard. You can't get away from the fact that writing web applications is difficult: it requires a knowledge of web servers, databases, programming languages, Unix and HTML. And that is just to do something simple, before you introduce collaborative coding, caching, task logs, error reporting and the like.

The problem here, as I see it, is that whilst writing software can at first seem simple enough, it's a trap. Before you know it, you're having to learn about very geeky things like data modelling or caching models just to do even simple things.

But there can be many other problems with custom development from the University perspective...

  • Keeping lots of small "solutions" working can be a nightmare as you upgrade part of the eco-system. It gets messy.
  • Who has the responsibility to maintain each piece of code? It gets messy.
  • Often, even well-run projects can start simple and manageable and end in complexity and ultimate failure. It gets messy.
  • Everyone has different ways of working with different tools, and different needs for collaboration and backups and resilience. It gets really messy.


The answer to all this custom development messiness might be to devolve development, or at least some of it, to the people who want it to happen: you.


There are lots of good reasons for taking a devolved development approach ...

  • Messiness becomes localised. Rather than having to deal with the symptoms of messiness ( a locked down system, with a bottleneck in development etc ) you get to live with the mess you made. 
  • You get to develop what you want, your way.
There are of course downsides to this approach ...
  • The University could end up with a devolved mess of processes and technology that NOBODY can understand. It could get messy.
So there is the problem. People often want really quick and simple custom development, but the current approach to this is messy, and the solution to the messiness is a different kind of potential messiness. It's a messy, never-ending vortex that could do with tidying up a bit.


Is The Solution Google App Engine?

Google App Engine is an "in the cloud" web development and hosting system. It means that you can create a web application on your computer and then, when it's finished, run it and make it available to everyone at York.

It's not currently enabled for york.ac.uk because, instead of charging a flat fee to have your application hosted, Google App Engine charges by bandwidth, database usage and hits.

So why could Google App Engine (GAE) be useful at York? 

Well, in many ways GAE doesn't make the whole development process much easier. GAE isn't a point-and-click-to-create-an-application solution; you still need to know how to code ( in python or java ) but it does make lots of the often forgotten parts of the development process simpler.

For example,
  • The database GAE uses is more like a spreadsheet than MySQL. This is so that if your database needs 20 million entries, it will still work. Google take care of scaling for you. And charge.
  • If your web application gets very popular, with thousands of hits a second, Google will take care of scaling for you. And charge.
  • GAE can use simple permissions to make your app only available to York staff ( see the sketch below ). Smaller applications are free.
  • Lots of the complexities of creating a web application, like creating task queues, are already built in.
So, if you are already familiar with creating Django web applications, then not much in GAE will be new.
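
As an illustration of that permissions point, here's a minimal sketch ( not code from my aggregator ) using GAE's built-in users API with the old webapp framework; if the application is created against a Google Apps domain, sign-in can be limited to that domain's accounts...

from google.appengine.api import users
from google.appengine.ext import webapp

class MainPage(webapp.RequestHandler):
    def get(self):
        user = users.get_current_user()
        if not user:
            # Not signed in: bounce the visitor to Google's login page and back
            self.redirect(users.create_login_url(self.request.uri))
            return
        self.response.out.write('Hello, %s' % user.nickname())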

But I Don't Want To Be A Programmer Tom!

You say that but I don't believe you. I take a wider view that everyone should be able to shape and control these computer things to do what we want.

It's really not your fault that currently "programming a computer" can be quite a difficult thing at times, which is why when I see anything that levels the playing field in technology terms I can get quite enthusiastic.

I get excited, not because it means there'll be a heap less mess in IT Services but because my experience shows me that if you give people good enough tools, they do amazing things with them.

What I tried to do

My aim was to quickly create a very simple aggregator in Google App Engine that would pull together any (blog) feeds from people at the University of York. There isn't ( in my opinion ) a good aggregator out there that doesn't require some sensitive hosting and effort, so I thought that the work involved might be minimal ( how hard could it be? )...

I thought it would mean that we ( meaning me, the Web Office and Anthony ) could have total control over the look and feel and maybe roll in a few "innovations" like tag clouds, maps perhaps, or search-phrase-based RSS feeds, so that "if anyone blogs about genomes at the university then add it to my news feed". That feature alone would be really useful in selling the usefulness of Google Reader.


What I actually did

I downloaded the SDK, read the documentation very carefully and set about getting myself a setup that I could envisage someone else ( as dim as I from a geeky perspective ) using.

Get The Right Editor

I quickly ditched using a text editor with the GoogleAppEngineLauncher.app because if you use the Eclipse-based Aptana editor, not only do you get code completion but you also get step-thru debugging!

Mac OS X users: when installing Aptana and creating an application, this is currently (probably) the path to where the SDK is located. The tutorials I used were out of date...

/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine

Found a "Bug" in Feedparser

This was almost a show stopper. There are lots of people complaining about it. The problem is that if you ask Feedparser to fetch and parse an RSS feed from Blogspot itself, like this...

f = feedparser.parse(self.url)

... then it blows up. I discovered that if, instead, you fetch the feed yourself ( here with App Engine's urlfetch ) and hand Feedparser the decoded content...

from google.appengine.api import urlfetch

result = urlfetch.fetch(self.url)
content = unicode(result.content, 'utf-8')
f = feedparser.parse(content)

... it works fine. I only mention it because someone might find it useful. Being able to read blog feeds is kind of essential to an aggregator.


Got Into A Mess

So. This is the thing. 

If I am right that Google App Engine is a contender for people at the University to use to develop small applications, then I need to understand it to a level where I would be happy teaching it. Perhaps to people with little programming experience.

1. Google App Engine doesn't use a "normal" MySQL database; it uses something similar called the datastore. This is good and bad. The datastore is different: searching and paging work differently, and the cost model pushes you towards caching. I even ended up de-normalising each tag's count ( of entries using that tag ) rather than querying for it each time.
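
To make that last point concrete, here's a minimal sketch ( hypothetical model names, not my actual aggregator code ) of de-normalising a tag count with the old db API, so that drawing a tag cloud doesn't need an expensive count query...

from google.appengine.ext import db

class Tag(db.Model):
    name = db.StringProperty(required=True)
    entry_count = db.IntegerProperty(default=0)  # de-normalised count

class Entry(db.Model):
    title = db.StringProperty()
    tags = db.StringListProperty()

def add_entry(title, tag_names):
    entry = Entry(title=title, tags=tag_names)
    entry.put()
    for name in tag_names:
        # Keep the count in step at write time, not at query time
        tag = Tag.get_or_insert('tag_' + name, name=name)
        tag.entry_count += 1
        tag.put()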


The End Result

The end result, which may or may not be running, is available here: http://uoynews.appspot.com/

It's a simple aggregator that should update feeds, could be made to look very pretty, and could have some cute features ( like search query RSS feeds mentioned earlier ).


BUT, I have already had to think about costing and caching and database writes, and I have no idea how to implement text searching ( in a way that won't affect costing and caching and database writes ). I'd used up my daily quota of database writes and only I was using the application!





In short, in trying to get to the simple stuff ( making a quick application ) I've had to both learn, and design around, some quite hairy concepts. It's not quite the simple application-making framework I'd hoped for. I think what I was looking for was Django-Lite™, but...

It's knocked my belief that there could be an application-creating tool for "the rest of us", because if anyone was going to create one, it'd be Google.

I feel a bit sad really. I really wanted App Engine to be a place where anyone could lash their ideas into something hosted online quickly, but the process felt all too familiar to someone that isn't that great at coding... It was bitty, messy and complex.

I've created the thing I wanted to create but I'm not proud of it. It's only 300 lines of python and a few template files but it already feels cumbersome and weighed down by the nitty gritty of geekery.



Tuesday, April 24, 2012

Scraping The Festival of Ideas, June 2012

I noticed something on Twitter about the University's Festival of Ideas and thought I'd take a look at the events listing. Not long ago, the Web Office used to put microformat information in web pages so that I could easily add events to my calendar... Either they've stopped doing that, or it's stopped working, so I wondered how easy it would be to grab the events listed and add them to my ( or a separate ) calendar.

In order to do this, I'd need to...

  1. Scrape the HTML from the web page and find the event data
  2. Connect to Google Calendar and add the events found


Because I like programming in python, the first thing I did was go and get the latest copy of BeautifulSoup, which is a library that is unbelievably handy for scraping data out of HTML, and also Google GData, which lets me talk to Google Calendar.

And so I began...


import urllib, urlparse, gdata, time, datetime
from bs4 import BeautifulSoup
import atom
import gdata.calendar
import gdata.calendar.service

... and loaded the libraries.  Then I connected to Google Calendar, like this...


print "Connecting to Google Calendar"
calendar_service = gdata.calendar.service.CalendarService()
calendar_service.email = '*********@york.ac.uk'
calendar_service.password = '**********'
calendar_service.source = 'Google-Calendar_Python_Sample-1.0'
calendar_service.ProgrammaticLogin()



 .... then got the web page with the Festival of Ideas events on it like this...


url = 'http://yorkfestivalofideas.com/talks/'
print "reading ", url
u = urllib.urlopen( url )
html = u.read()

... At this point, I knew I wanted to create a separate calendar, so I made one in Google Calendar ( IMPORTANT! Set the timezone of your newly created calendar!!! ). Once I'd done this, I could then find what's called the calendar link which you use to specify which calendar you want events to go into...


def get_my_calendars_url(cal_name):
    feed = calendar_service.GetOwnCalendarsFeed()
    for i, a_calendar in enumerate(feed.entry):
        name = a_calendar.title.text
        print i, a_calendar.title.text, a_calendar.link[0].href
        if name == cal_name:
            return a_calendar.link[0].href


calendar_link = get_my_calendars_url("Festival of Ideas")




So, now I have some HTML with useful information in it and a way of connecting to my chosen calendar... I need to use BeautifulSoup to fish out the data I need. I begin like this...


soup = BeautifulSoup( html )
events = soup.find_all("div", {'class':'event'})


... Now the HTML has been turned into a "soup", which means I can do fancy things with it... like the 2nd line above, where I grab any DIV that is of class "event" from code that looks like this...


<div class="event">
  <div class="eventdate">
    <div class="day">Thu</div>
    <div class="date">14</div>
    <div class="month">Jun</div>
  </div>
  <div class="eventdetails">
    <p class="eventtitle">
      <a href="/talks/2012/frenck/">Where it all began: The Big Bang</a>
    </p>
    <p class="eventteaser">
      Professor Carlos Frenk will open this year's York Festival of Ideas with a talk on the biggest metamorphosis of all - that of the universe as a whole, from the simplicity of the Big Bang to the complexity of the universe of galaxies, stars, and the planet on which we live.
    </p>
  </div>
  <div class="clear"></div>
</div>


... Once I've got a list of events I can then do this, which finds the title, the text, and the dates and times of the events...



for event in events:
    try:
        title = event.find('p', {'class':'eventtitle'}).find('a').contents[0].strip()
        href = event.find('p', {'class':'eventtitle'}).find('a')['href']
        href = urlparse.urljoin(url, href)

        # Get the actual page in the href!
        u = urllib.urlopen(href)
        event_html = u.read()
        small_soup = BeautifulSoup(event_html)
        start_time = small_soup.find('abbr', {'class':'dtstart'})['title']
        st = time.strptime(start_time, "%Y-%m-%dT%H:%M")
        # Assume each event lasts two hours
        end_dt = datetime.datetime(2012, st.tm_mon, st.tm_mday, st.tm_hour + 2, 0, 0)
        end_time = end_dt.strftime("%Y-%m-%dT%H:%M:%S")
        start_time = start_time + ":00"  # HACK UG!

        teaser = event.find('p', {'class':'eventteaser'}).contents[0].strip()
        teaser = teaser + "\n\n" + href

        print "creating event:", title
        print create_event(title, teaser, "York, UK", start_time, end_time)

        print "_" * 80
    except Exception, err:
        print err


.... and the create_event code, which uses that calendar_link mentioned earlier, is...


def create_event(title='A lovely event',
                 content='Some text about it',
                 where='York, UK', start_time=None, end_time=None):

    event = gdata.calendar.CalendarEventEntry()
    event.title = atom.Title(text=title)
    event.content = atom.Content(text=content)

    #time_zone = 'Europe/London'
    #event.timezone = gdata.calendar.data.TimeZoneProperty(value=time_zone)
    event.where.append(gdata.calendar.Where(value_string=where))

    if start_time is None:
        # Use current time for the start_time and have the event last 1 hour
        start_time = time.strftime('%Y-%m-%dT%H:%M:%S.000Z', time.gmtime())
        end_time = time.strftime('%Y-%m-%dT%H:%M:%S.000Z', time.gmtime(time.time() + 3600))
    event.when.append(gdata.calendar.When(start_time=start_time, end_time=end_time))

    new_event = calendar_service.InsertEvent(event, calendar_link)

    return new_event



... Putting it all together, I got events that can be displayed in a fairly rubbishy widget ( go to June 2012 to see the events! ) or a calendar that anyone can browse here.

https://www.google.com/calendar/embed?src=york.ac.uk_9d9et5aruukobiaqpgke4n63rk@group.calendar.google.com&ctz=Europe/London&gsessionid=OK


The End Result?


To be honest, presentation isn't Google Calendar's strong point, is it? It's fugly. It's all about the utility though... and I suppose making sure you get to those events.

I guess my point was, and is, that more of this sort of data should be ending up in places where I can use it, i.e. in Google Calendar rather than hiding on a web page somewhere. Maybe this little bit of code will help someone to get their events in a more usable form.



Wednesday, April 4, 2012

The Blogger vs Wordpress debate



I have had four people THIS WEEK (and it's only Wednesday) come to me to ask about the University of York's blogging options... that we don't have.

The Social Policy Research Unit wanted to start blogging this week, so I showed them Blogger. Within minutes they'd created a blog, mimicked their dept's colours and added the logo and started adding content. Interestingly, to me, they're using tags/labels to manage the main navigation.


Wordpress. We Simply Don't Have The Manpower

There are two compelling arguments FOR Wordpress. People know, use and like it, and, from a branding perspective, it is easy to create "York Blogs" with a locked-down design created by the Web Office.

We have been looking at both internally hosted Wordpress and buying a Wordpress service from Page.ly. The result of this seems to be that we don't have the manpower to host Wordpress ourselves ( and also back it up, run a development/test version AND keep the site maintained etc. ).

Connecting a hosted Multi-site Wordpress service with our authentication system isn't straightforward. The LDAP plugins don't work out of the box ( although a Google authentication plugin may be a solution, but this is completely untested... and we don't really have the manpower to test this with any sense of due diligence ).

The hosting of the Wordpress install may also become a pain. If it became terribly popular, the configuring and tweaking of the caching needed is non-trivial.

One of the big pluses of Wordpress is its hackability and the ability to add useful plugins. When this is done at an organisational level we can't simply add any requested plugin not knowing how it may impact other sites from both a design and security perspective. We would need to evaluate the shared need and impact of adding the sort of "quick" hacks that most people used to working with Wordpress take for granted. Then we'd need to test them.

Blogger Got A Whole Heap Better. They May Have The Manpower

Whilst we have been looking at the Wordpress options, Blogger ( owned by Google ) has improved significantly. If you haven't noticed, York has "gone Google". Furthermore, it seems that Blogger now has Google+ integration ( automatically offering you the option to publish a link to Google+, and showing the blogs you are a "Contributor to" on your Google+ profile ).

As you can see from this blog, although I have probably annoyed the dickens out of Dan Wiggle ( and rightly so, it's just an experiment Dan ) with regards to the design, I have, with a few tweaks, addressed to some degree the branding issues of look 'n' feel and that horrible "Next Blog" link in the navbar ( come on Google, get that fixed! ).

So Which Is Best?

I challenged one of our most articulate Wordpress fans ( who shall remain Ned-less ) recently and asked him to clearly state the actual advantages of Wordpress. And, I may be wrong, but despite him trying, he couldn't come up with a single concrete advantage except for the fact that people both like and know it. Which is a very, very good reason, I know, but...

Personal Ownership
  • Blogger: From an organisational perspective, until Google properly integrates Blogger with Google Apps, a Blogger blog better suits the needs of an individual. A person can change job and take their blog with them. Personal "ownership" of a blog is often a key motivator anyway.
  • Wordpress: If we ran Wordpress then, when people leave, we would still have control of their blog and content. People could export all their blog content ( as an XML file ) and re-import it into a new blog.

Control
  • Blogger: Although Blogger works with your York credentials, any content created can not be deleted by York staff.
  • Wordpress: York Web Admin staff could monitor blogs created and offer advice, or indeed take down any inappropriate content.

Branding
  • Blogger: Some branding is possible ( see this blog ). Removing the nav-bar is probably against Blogger's policies, but a. come on Google, sort out the Next Blog issue, and b. what if you don't remove it but set its top to -30px? Note: breaking the ToS can mean removal of blogs without warning.
  • Wordpress: Full branding is possible so that people don't need to worry about this. Many projects and depts would prefer this to having to hack the designs themselves.

Hacking Risk
  • Blogger: Whilst Blogger blogs do get hacked, this is normally because someone's password has been guessed, which would also apply to WP. Most support issues would become Google's and not ours.
  • Wordpress: Wordpress sites do get hacked. Keeping the site up to date needs to be done regularly. Lincoln seem to be able to manage this OK ( currently 614 blogs ). DoS attacks? Even testing software updates ( moving content from LIVE to DEV, munging relative URLs in content, moving the wp-content folder, THEN applying the patch ) will become quite a task in terms of data size.

Monitoring Content & Serendipitous Navigation
  • Blogger: One BIG problem with encouraging the creation of lots of Blogger blogs is that there is no way to keep track of them. This is not from a control perspective; it prevents people from finding out what other related blog content there may be. The irony here is that IF we run with Blogger then it really raises a HUGE need for a central aggregator of these disparate blogs, both for monitoring content and to allow people to find other interesting or related blog content. Ideally this might have a TagCloud-like interface.
  • Wordpress: Were we to use the BuddyPress plugin ( like this site or Lincoln's above ), all blog posts are listed.

Future Proofing
  • Blogger: We can't guarantee that Google won't shut down Blogger tomorrow.
  • Wordpress: We can't guarantee that Wordpress won't get bought by BigCorp tomorrow.

Features
  • Blogger: From an editing perspective, Blogger doesn't seem as powerful as WP. Many of WP's "cool features" are often replicable with Blogger's Gadgets.
  • Wordpress: Remember, pretty much all WYSIWYG editors suck.

Cost
  • Blogger: Free. No dodgy ads.
  • Wordpress: $29.97 ( Go Ad-Free ) + $30.00 ( Custom Design ) + $12.00 ( Domain name transfer ) = $71.97 in total.


My Conclusion

I began trying to get Wordpress running over two years ago. I was an ardent fan of Wordpress. Now I'm not so sure.

For me, our Blogger vs Wordpress debate comes down to:

  • Manpower - to do it properly
  • Control - and the need to be able to at least show people where the other blogs are

This suggests to me that WHATEVER WE CHOOSE, before we choose it, we need a central aggregator to pull together disparate blog content into one lovely tag cloud.


Just a test.

Nothing to see here, move along please. Although do come back. You never know.

This blog is an experiment to see how, using minimal intervention, a York brand echo can be achieved. After attempting to hide the "Next Blog" link in the navbar and failing, I have simply hidden the navbar. I did make it fade out and fade in when your mouse went to the top of the screen, but having a random "Next Blog" link is simply not acceptable. Come on Google, sort this out!

I have attempted to replicate the IT Services colours, but how close to the real site this blog SHOULD be is moot.

The aim here is just to see if all of this is possible.