Wednesday, May 9, 2012

Getting CSV data into Google Spreadsheets Automatically

8:43 AM

Today I was attempting to get CSV data from Estates' Alarm System into Google Docs as a spreadsheet. There were two ways I could try to achieve this...


  1. Create an AppScript in Google that pulled a .CSV file from a web server
  2. Write a (Python) script on the local machine that pushed the data into a Google Spreadsheet using the API.

The Google AppScript Way

As you know, my JavaScript ain't great, but it initially looked like it was going to work... Some code like the below, using the CSVToArray function from here, looked promising.



function encode_utf8( s ){
  //This is the code that "I think" turns the UTF16 LE into standard stuff....
  return unescape( encodeURIComponent( s ) );
}

function get_csv() {
  var url = 'http://www-users.york.ac.uk/~admn812/alarms.csv.Active BA Alarms.csv'; // Change this to the URL of your file
  var response = UrlFetchApp.fetch(url);
  // If there's an error in the response code, maybe tell someone
  //MailApp.sendEmail("s.brown@york.ac.uk", "Error with CSV grabber:" + response.getResponseCode() , "Text of message goes here")
  Logger.log( "RESPONSE " + response.getResponseCode());
  var data = encode_utf8(response.getContentText().toString());  
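  // NOTE (an untested idea): the UrlFetchApp response also has getContentText(charset),
  // so if the file really is UTF-16 LE, something like
  //   var data = response.getContentText('UTF-16LE');
  // might decode it directly and avoid the escape/unescape trick above.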
  return data; // as text
}


function importFromCSV() {
  // This is the function to which you attach a trigger to run every hour  
  var rawData = get_csv(); // gets the data, makes it nice...
 
  var csvData = CSVToArray(rawData, "\t"); // turn into an array
  Logger.log("CSV ITEMS " + csvData.length);
 
  //Write data to first sheet in this spreadsheet
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var sheet = ss.getActiveSheet();
  //Logger.log(sheet);
 
  ////// From: https://developers.google.com/apps-script/articles/docslist_tutorial
 
  // I think this will write data from the 0th cell. It actually needs a line to select ALL the data and delete it,
  // in case there is less data than the previous import.
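  // A sketch of that missing step (clearContents() is a standard Sheet method):
  //   sheet.clearContents(); // wipe the previous import before writing new rows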
 
  for (var i = 0; i < csvData.length; i++) {
    sheet.getRange(i+1, 1, 1, csvData[i].length).setValues([csvData[i]]);
     //this might be where you would look at the data and maybe...
    // cell.offset(i,i+2).setBackgroundColor("green");
    //Logger.log( "i:" + i + " " + csvData[i] );
  }
}

But I got stuck. I think it's because the CSV file was UTF-16 Little Endian, and my regular expressions wouldn't work.

The Python Way

The Python way is completely different in that it runs on the same computer as the CSV file and pushes the data into a Google Spreadsheet.

I found bugs if you have funnily named header rows (CamelCaseOnlyPerhaps). I solved that by adding one cell at a time. It's slow but reliable...
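My guess is that the list feed keys each row on the header names normalised into gsx: element names, i.e. lowercased with spaces and punctuation stripped, so CamelCase headers stop matching what you send. If that assumption is right, a row dict for InsertRow (using the spr_client and ids set up in the script below) would need keys like this:

# Hypothetical example: keys must match the gsx: names derived from the header row,
# e.g. "Time of last Change" -> "timeoflastchange" (lowercased, spaces stripped).
row = {
    'timeoflastchange': '09/05/2012 08:43',
    'category': 'BA',
    'status': 'Active',
}
spr_client.InsertRow(row, spreadsheet_key, worksheet_id)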

I also found that if I used a DictReader() it broke, so I just iterated through the lines and the items (which seemed to work).
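For completeness, here's a minimal sketch of how I'd try the csv module route next time, assuming a tab-delimited UTF-16 LE file (untested against the real alarm export):

import csv, codecs

def rows_from_csv(filepath):
    # Decode the UTF-16 LE bytes first, then hand plain lines to csv,
    # which copes with quoted fields better than a bare split("\t").
    f = codecs.open(filepath, 'r', 'utf-16-le')
    text = f.read()
    f.close()
    # Python 2's csv.reader wants byte strings, so re-encode as UTF-8.
    lines = [line.encode('utf-8') for line in text.splitlines()]
    return list(csv.reader(lines, delimiter='\t'))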

import time, urllib, csv, re
from pprint import pprint
import gdata.spreadsheet.service

email = 'your.name@york.ac.uk'
password = '********'

'''Warning. This means of connecting to the GDATA API is being deprecated soon in favour of OAuth'''
spr_client = gdata.spreadsheet.service.SpreadsheetsService()
spr_client.email = email
spr_client.password = password
spr_client.source = 'Example CSV To Spreadsheet Writing Application'
spr_client.ProgrammaticLogin( )
spreadsheet_key = '0Ajnu7JgRtB5CdDFQeGM2YVZBNXROcC1vZ0xCQ2tVX1E'

data_url = 'http://www-users.york.ac.uk/~admn812/alarms.csv.Active BA Alarms.csv'
# All spreadsheets have worksheets. I think worksheet #1 by default always has the id 'od6'
worksheet_id = 'od6'
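# If 'od6' turns out to be wrong, the worksheet ids can be listed (untested sketch):
#   feed = spr_client.GetWorksheetsFeed(spreadsheet_key)
#   for entry in feed.entry:
#       print entry.id.text.rsplit('/', 1)[1], entry.title.text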

#### Examples from http://pseudoscripter.wordpress.com/2011/05/09/automatically-update-spreadsheets-and-graphs/ ####
def _CellsUpdateAction(row,col,inputValue,key,wksht_id):
    '''You "can" update an entire row, or rows even with a dict(array) or list of them, but I got a bizarre error when doing so, so in an attempt to find the nasty cell, do it a cell at a time'''
    entry = spr_client.UpdateCell(row=row, col=col, inputValue=inputValue,
            key=key, wksht_id=wksht_id)
    if isinstance(entry, gdata.spreadsheet.SpreadsheetsCell):
        print row,",", col, 'updated:', inputValue

def _PrintFeed(feed):
    '''Just a way to iterate through what's available'''
    for i, entry in enumerate(feed.entry):
        if isinstance(feed, gdata.spreadsheet.SpreadsheetsCellsFeed):
            print '%s %s\n' % (entry.title.text, entry.content.text)
        elif isinstance(feed, gdata.spreadsheet.SpreadsheetsListFeed):
            print '%s %s %s' % (i, entry.title.text, entry.content.text)
            # Print this row's value for each column (the custom dictionary is
            # built using the gsx: elements in the entry.)
            print 'Contents:'
            for key in entry.custom:
                print '  %s: %s' % (key, entry.custom[key].text)
            print '\n',
        else:
            # THIS ONE!
            print '%s %s, %s' % (i, entry.title.text, str(entry.id.text))
            #print dir(entry)

def show_my_spreadsheets():
    print "My spreadsheets are..."
    feed = spr_client.GetSpreadsheetsFeed()
    _PrintFeed(feed)

def replace(text, look_for, replace_with=''):
    reg = look_for
    p = re.compile(reg, re.IGNORECASE | re.DOTALL)
    t = p.sub(replace_with, text)
    return t
       
def match(s, reg):
    p = re.compile(reg, re.IGNORECASE| re.DOTALL)
    results = p.findall(s)
    return results
   
def get_data(data_url=data_url):
    u = urllib.urlopen(data_url)
    data = u.read()
    data = data.decode("utf-16-le")
    return data
   
def get_data_from_file(filepath):
    'This should work, not tested it.'
    f = open(filepath)
    data = f.read()
    f.close()
   
    data = data.decode("utf-16-le")
    return data
   
def write_data( row_dict ):
    'Not used'
    entry = spr_client.InsertRow(row_dict, spreadsheet_key, worksheet_id)
    if isinstance(entry, gdata.spreadsheet.SpreadsheetsList):
      print "Insert row succeeded."
    else:
      print "Insert row failed."

def run():
    filepath = '/Users/tomsmith/Downloads/alarms.csv.Active BA Alarms (8).csv'   
    data = get_data() # or ... data = get_data_from_file("C:/myfolder/mycsv.csv")
    #Strip the first junky stuff off...
    data = match( data, '"Time of last Change.*')[0]
   
    # I chose to add the field headers by hand. You can do this on the fly, but
    # I found a bug if they had uppercase letters. Grr!
    fields = ["Time of last Change","Category","Technical Description","Status","Priority","Alarm Value","Alarm Message",]
   
   
    # Write header row
    for  f,field in enumerate(fields):
        _CellsUpdateAction(1,f+1,field,spreadsheet_key,worksheet_id)
       
    ## Now write the data, cell by cell
    data = data.split("\n")
    for l, line in enumerate(data):
        if l == 0:
            pass #the header line
        else:
            items = line.split("\t")
            the_dict = {}
            print "Line:", l #remember, zero-based
           
            for i, item in enumerate(items):
                #Agh, line 153 in the data doesn't have enough items
                try:
                    the_value = item.replace('"', '') #Strip quotes off the beginning/end
                   
                    #print the_value
                    header_name = fields[i].lower().replace(" ", "").replace("'", "")
                    the_dict[ header_name ] = str(the_value)
                   
                    _CellsUpdateAction(l+1,i+1,the_value,spreadsheet_key,worksheet_id)
                except Exception, err:
                    #print "\t Line", l, "only has", len(items), "items", items
                    #print err
                    pass
           
            #time.sleep(1) #Give Google a chance to catch up a bit
       
   
if __name__ == '__main__':
    #show_my_spreadsheets()
    run( )

The End Result

Is shocking really. There's no error checking and you have to put your credentials in for it to work... but it works! And it means Estates can keep feeding data from various applications into one visualisation dashboard. Hopefully more on that later.

I believe that it should be possible to grab data using AppScript (the pull approach), but I was beaten by Unicode text formats and rudimentary JavaScript skills. That approach does require the CSV file to be available online, which is at best a complication and at worst a security challenge.