
Thursday, March 12, 2015

Announcing Version 1.8 of the .NET Library for Google Data APIs

We just released version 1.8 of the .NET Library for Google Data APIs, which adds brand-new service classes and samples for the following three APIs:
  • Google Apps Audit API
  • Google Content API for Shopping
  • Google Calendar Resource API
The library also extends the Email Settings API service with new functionality to retrieve existing settings, support new filter actions, and manage email delegation.

In order to improve security and stability, SSL is now turned on by default for all APIs that support it, and more than 30 issues have been fixed since the previous major release (1.7.0.1).

For all details, please check the Release Notes:
http://google-gdata.googlecode.com/svn/trunk/clients/cs/RELEASE_NOTES.HTML



Monday, March 9, 2015

Python and OAuth 2.0 for Google Data APIs

Since March of this year, Google has supported OAuth 2.0 for many APIs, including Google Data APIs such as Google Calendar, Google Contacts and Google Documents List. Google's implementation of OAuth 2.0 introduces many advantages compared to OAuth 1.0, such as simplicity for developers and a more polished user experience.

We've just added support for this authorization mechanism to the gdata-python-client library; let's take a look at how it works by retrieving an access token for the Google Calendar and Google Documents List APIs and listing protected data.

Getting Started

First, you will need to retrieve or sync the project from the repository using Mercurial:

hg clone https://code.google.com/p/gdata-python-client/

For more information about installing this library, please refer to the Getting Started With the Google Data Python Library article.

Now that the client library is installed, you can go to your APIs Console to either create a new project or use information about an existing one from the API Access pane.

Getting the Authorization URL

Your application will require the user to grant it permission to access protected APIs on their behalf. It must redirect the user to Google's authorization server and specify the scopes of the APIs it is requesting permission to access.

Available Google Data API scopes are listed in the Google Data FAQ.

Here's how your application can generate the appropriate URL and redirect the user:

import gdata.gauth

# The client id and secret can be found on your APIs Console.
CLIENT_ID = 'your-client-id'          # placeholder: copy the value from the APIs Console
CLIENT_SECRET = 'your-client-secret'  # placeholder: copy the value from the APIs Console

# Authorization can be requested for multiple APIs at once by specifying
# multiple scopes separated by spaces.
SCOPES = ['https://docs.google.com/feeds/',
          'https://www.google.com/calendar/feeds/']
USER_AGENT = 'your-application-name'  # placeholder: identifies your application

# Save the token for later use.
token = gdata.gauth.OAuth2Token(
    client_id=CLIENT_ID, client_secret=CLIENT_SECRET, scope=' '.join(SCOPES),
    user_agent=USER_AGENT)

# The "redirect_url" parameter needs to match the one you entered in the
# APIs Console and points to your callback handler.
self.redirect(
    token.generate_authorize_url(redirect_url='http://www.example.com/oauth2callback'))

If all the parameters match what has been provided in the APIs Console, the user will be shown an authorization dialog.

When an action is taken (e.g., allowing or declining the access), Google's authorization server will redirect the user to the specified redirect URL and include an authorization code as a query parameter. Your application then needs to make a call to Google's token endpoint to exchange this authorization code for an access token.

Getting an Access Token

import atom.http_core

url = atom.http_core.Uri.parse_uri(self.request.uri)
if 'error' in url.query:
  # The user declined the authorization request.
  # The application should handle this error appropriately.
  pass
else:
  # This is the token instantiated in the first section.
  token.get_access_token(url.query)

The redirect handler retrieves the authorization code returned by Google's authorization server and exchanges it for a short-lived access token and a long-lived refresh token, which can be used to retrieve a new access token. Both the access and refresh tokens must be kept private to the application server and should never be revealed to other client applications or stored in a cookie.

To store the token object in a secured datastore or keystore, the gdata.gauth.token_to_blob() function can be used to serialize the token into a string. The gdata.gauth.token_from_blob() function performs the opposite operation, instantiating a new token object from a string.
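For example, here is a minimal sketch (assuming the token object created in the first section and a storage layer of your choice):

import gdata.gauth

# Serialize the token so it can be persisted (for example, in your datastore,
# keyed by user). The blob must be stored as securely as the token itself.
token_blob = gdata.gauth.token_to_blob(token)

# Later, rebuild an equivalent token object from the stored string.
restored_token = gdata.gauth.token_from_blob(token_blob)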

Calling Protected APIs

Now that an access token has been retrieved, it can be used to authorize calls to the protected APIs specified in the scope parameter.

import gdata.calendar.client
import gdata.docs.client

# Access the Google Calendar API.
calendar_client = gdata.calendar.client.CalendarClient(source=USER_AGENT)
# This is the token instantiated in the first section.
calendar_client = token.authorize(calendar_client)
calendars_feed = calendar_client.GetCalendarsFeed()
for entry in calendars_feed.entry:
  print entry.title.text

# Access the Google Documents List API.
docs_client = gdata.docs.client.DocsClient(source=USER_AGENT)
# This is the token instantiated in the first section.
docs_client = token.authorize(docs_client)
docs_feed = docs_client.GetDocumentListFeed()
for entry in docs_feed.entry:
  print entry.title.text

For more information about OAuth 2.0, please have a look at the developer’s guide and let us know if you have any questions by posting them in the support forums for the APIs you’re accessing.



Alain Vongsouvanh

Alain is a Developer Programs Engineer for Google Apps with a focus on Google Calendar and Google Contacts. Before Google, he graduated with his Master's in Computer Science from EPITA, France.

Updated 9/30/2011 to fix a small typo in the code


Monday, February 16, 2015

Part I: Data Warehousing Star Schema (from Oracle Documentation)

This post collects data warehousing concepts for a quick understanding when working with BI tools such as Talend ETL or Pentaho Kettle.
A few other basic concepts will be covered in Part II through Part V.
Schema 
A schema is a collection of database objects, including tables, views, indexes, and synonyms.

Schema models are used when designing a data warehouse.

The Star schema
  1. It is the simplest data warehouse schema.
  2. Why is it called a star schema?
    Because the diagram of a star schema resembles a star, with points radiating from a center.
  3. The center of the star consists of one or more fact tables, and the points of the star are the dimension tables.
  4. A star schema is characterized by one or more very large fact tables that contain the primary information in the data warehouse, and a number of much smaller dimension tables (or lookup tables).
  5. Each dimension table contains information about the entries for a particular attribute in the fact table (see the sketch after this list).
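As a small, hypothetical illustration (the sales, product, and customer tables and their columns are made up for this sketch, not taken from the Oracle documentation), a minimal star schema can be laid out like this:

import sqlite3

# A minimal star schema: one central fact table (sales) referencing two much
# smaller dimension tables (product and customer) by foreign key.
conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE product (
  product_id   INTEGER PRIMARY KEY,
  product_name TEXT,
  category     TEXT
);

CREATE TABLE customer (
  customer_id   INTEGER PRIMARY KEY,
  customer_name TEXT,
  city          TEXT
);

-- The fact table holds the measure (amount) plus a foreign key to each dimension.
CREATE TABLE sales (
  product_id  INTEGER REFERENCES product(product_id),
  customer_id INTEGER REFERENCES customer(customer_id),
  amount      REAL
);
""")

# A few made-up rows so the queries in the later sketches return something.
conn.executemany("INSERT INTO product VALUES (?, ?, ?)",
                 [(1, 'Laptop', 'Electronics'), (2, 'Desk', 'Furniture')])
conn.executemany("INSERT INTO customer VALUES (?, ?, ?)",
                 [(1, 'Alice', 'Paris'), (2, 'Bob', 'Lyon')])
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [(1, 1, 1200.0), (2, 2, 300.0), (1, 2, 1150.0)])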
Star Query:
  1. A star query is a join between a fact table and a number of lookup tables.
  2. Each lookup table is joined to the fact table using a primary-key to foreign-key join, but the lookup tables are not joined to each other.
Star Join:
  1. A star join is a primary-key to foreign-key join of the dimension tables to a fact table (see the query sketch after this list).
  2. The fact table normally has a concatenated index on the key columns to facilitate this type of join.
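Continuing the hypothetical sqlite3 sketch above, a star query joins the fact table to each dimension table on its key; the dimension tables are never joined to each other:

# Star query: the fact table is joined to each dimension table with a
# primary-key to foreign-key join; the dimension tables are not joined
# to each other.
rows = conn.execute("""
SELECT c.customer_name, p.product_name, s.amount
FROM   sales s
JOIN   product  p ON p.product_id  = s.product_id
JOIN   customer c ON c.customer_id = s.customer_id
""").fetchall()

for customer_name, product_name, amount in rows:
    print("%s bought %s for %.2f" % (customer_name, product_name, amount))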

Advantages of Star Schema:
* Star schemas are denormalized.
* That is, the normal rules of normalization applied to transactional relational databases are relaxed during star schema design and implementation.
  • Simpler queries:
    • Star schema join logic is generally simpler than the join logic required to retrieve data from a highly normalized transactional schema.
  • Simplified business reporting logic:
    • Compared to highly normalized schemas, the star schema simplifies common business reporting logic, such as period-over-period and as-of reporting.
  • Query performance gains:
    • Star schemas can provide performance enhancements for read-only reporting applications when compared to highly normalized schemas.
  • Fast aggregations:
    • The simpler queries against a star schema can result in improved performance for aggregation operations (see the aggregation sketch after this list).
  • Feeding cubes:
    • Star schemas are used by all OLAP systems to build proprietary OLAP cubes efficiently.
    • Most major OLAP systems provide a ROLAP mode of operation which can use a star schema directly as a source without building a proprietary cube structure.
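As a quick illustration of the fast aggregations point, still using the hypothetical sqlite3 tables from the first sketch, a single GROUP BY over the fact table rolls sales up by product category:

# Aggregation over the star schema: roll sales up by product category.
totals = conn.execute("""
SELECT p.category, SUM(s.amount) AS total_amount
FROM   sales s
JOIN   product p ON p.product_id = s.product_id
GROUP  BY p.category
""").fetchall()

for category, total_amount in totals:
    print("%s: %.2f" % (category, total_amount))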
Example 1: (star schema diagram; see the Oracle documentation linked below)
References:
http://docs.oracle.com/cd/A87860_01/doc/server.817/a76994/schemas.htm
http://en.wikipedia.org/wiki/Star_schema
