
Transition from 2to3 to use of six.py to be merged in 5.0.0-dev #519

Merged: 78 commits, Feb 1, 2017
cd41c2c
six: added six to requirements
joernhees Nov 24, 2014
1d41460
six: modify setup.py so we can sequentially change files from 2to3 to…
joernhees Nov 26, 2014
9157c47
six: transformed and extended py3compat to be used as six wrapper
joernhees Nov 26, 2014
11a7a28
six: fix tuple assignment in method signature for py3
joernhees Nov 26, 2014
8cbeae2
six: __future__ headers
joernhees Nov 26, 2014
d96a31a
cleanup: import of hashlib.md5
joernhees Nov 26, 2014
0cb6444
six: relative imports for compat and py3compat
joernhees Nov 26, 2014
58b7bd1
six: use urljoin, urlquote, urldefrag from six.moves
joernhees Nov 26, 2014
5afd83c
six: use PY2, PY3 from six, __nonzero__/__bool__ compatibility
joernhees Nov 26, 2014
2718777
six: unicode --> text_type
joernhees Nov 26, 2014
fa67ba2
six: basestring --> string_types
joernhees Nov 26, 2014
0079314
six: long --> long_type
joernhees Nov 26, 2014
a20f5c2
six: term.py done
joernhees Nov 26, 2014
617754c
six: __init__.py done
joernhees Nov 26, 2014
331d5ab
six: added list of python files which are 2to3 transformed.
joernhees Nov 27, 2014
502d091
six: collection.py headers and exception syntax
joernhees Nov 27, 2014
7603e57
six: compare.py: headers, unicode
joernhees Feb 19, 2015
5d2da12
six: events.py: headers, handling of .keys()
joernhees Nov 27, 2014
52df5e6
six: plugin.py: headers, handling of iteritems
joernhees Nov 27, 2014
06cb5bf
six: py3compat imports BytesIO and StringIO from six
joernhees Nov 28, 2014
1cba91d
six: query.py: headers
joernhees Nov 28, 2014
6791744
six: query.py: imports fixed
joernhees Nov 28, 2014
cc5d61d
six: query.py: unicode --> text_type
joernhees Nov 28, 2014
60520e6
six: query.py: __nonzero__ --> __bool__ handling
joernhees Nov 28, 2014
739423b
six: query.py done
joernhees Nov 28, 2014
4fca699
six: resource.py: headers, unicode
joernhees Nov 28, 2014
3999d7f
six: py3compat import cPickle
joernhees Nov 28, 2014
6348976
six: store.py: headers, pickle, exceptions, method sig tuple assignments
joernhees Nov 28, 2014
5c2c4de
six: util.py: headers, StringIO
joernhees Nov 28, 2014
0c533d0
six: py3compat.py: use efficient py2 BytesIO version
joernhees Nov 28, 2014
ed4a877
six: graph.py: headers, imports
joernhees Nov 28, 2014
3c4c9ba
six: graph.py: method sig tuple assignments, iterators
joernhees Nov 28, 2014
b989590
six: py3compat.py: added six.moves.urllib.request.pathname2url
joernhees Nov 29, 2014
c7233c4
six: namespace.py: headers, imports
joernhees Nov 29, 2014
784f740
six: namespace.py: unicode, string_types
joernhees Nov 29, 2014
74c3877
six: namespace.py: xrange --> range
joernhees Nov 29, 2014
80d8162
six: namespace.py: removed unnecessary u"" prefix
joernhees Nov 29, 2014
0b3c1d7
six: namespace.py done
joernhees Nov 29, 2014
57d36e8
six: parser.py: headers, imports, unicode, basestring
joernhees Nov 29, 2014
75a8fe5
six: run_tests.py: cleanup, py3 compatible
joernhees Nov 30, 2014
5c92c70
six: run_tests_py3.sh updated to base on setup.py build & use run_tes…
joernhees Nov 30, 2014
a2c0ff2
six: extras/describer.py: headers, rel imports
joernhees Dec 1, 2014
65fb676
six: extras/infixowl.py: headers
joernhees Dec 1, 2014
0cd6546
six: memory.py: headers, method sig tuple assignments
joernhees Dec 2, 2014
58c222d
six: py3compat.py: handlers for iteritems, iterkeys, itervalues
joernhees Dec 2, 2014
1e0780b
six: memory.py: handling of iteritems
joernhees Dec 2, 2014
4b3d479
six: py3compat.py: added binary_type and unichr
joernhees Dec 3, 2014
5972b4f
six: parsers/notation3.py: hearders; str, unicode, long, unichr, prin…
joernhees Dec 3, 2014
2a8c084
six: parsers/nquads.py: headers, exception syntax
joernhees Dec 4, 2014
77fd117
six: util.py: cleanup unused StringIO
joernhees Dec 4, 2014
b93cddb
six: compare.py: headers and unicode --> text_type
joernhees Feb 19, 2015
1746c32
trivial py3 conversion of tools
gromgull Nov 19, 2016
f007dac
converted all base serialisers and parsers
gromgull Nov 19, 2016
fbf32e1
converted examples
gromgull Nov 19, 2016
15d74d6
converted sleepycat store
gromgull Nov 19, 2016
3300505
converted sparql engine
gromgull Nov 19, 2016
8798f1b
converted sparql results files
gromgull Nov 19, 2016
afd3fb4
converted stores
gromgull Nov 19, 2016
402dab1
converted rest of microdata classes
gromgull Nov 19, 2016
56f6426
converted rest of rdfaA files
gromgull Nov 21, 2016
99dcb42
converted docs/plugintable
gromgull Nov 21, 2016
25fe5ed
converted csv2rdf
gromgull Nov 21, 2016
8bf3523
tagged sparqlstore as converted and fixed example
gromgull Nov 21, 2016
c267023
converted all tests
gromgull Nov 21, 2016
b8b4347
removed no longer existing files from skiplist
joernhees Jan 28, 2017
73c0a3a
six: sparql engine: leftover iteritems, itervalues, text_type
joernhees Jan 28, 2017
b16108d
six: turtle: text_type
joernhees Jan 28, 2017
bfc4f19
six: sparql aggregates no longer relies on map returning a list
joernhees Jan 28, 2017
60adef1
six: add still un-sixed files
joernhees Jan 28, 2017
e285994
converted last test files
gromgull Jan 30, 2017
08e5cb8
removed special py3 handling code from setup.py and travis files
gromgull Jan 30, 2017
ff4616e
restored find_version in setup.py
gromgull Jan 30, 2017
bfcde84
removed most of the six import from py3compat
gromgull Jan 30, 2017
2ebecb6
converted csv2rdf to six
gromgull Jan 30, 2017
523044c
reverted setuptools upgrade removal in travis.yml
gromgull Jan 30, 2017
6916362
last remaining unichr from py3compat
gromgull Jan 30, 2017
1d01e1e
remove format_doctest from py3compat
gromgull Jan 31, 2017
96ff354
moved all compat code to rdflib.compat
gromgull Jan 31, 2017
15 changes: 3 additions & 12 deletions .travis.yml
@@ -14,29 +14,20 @@ python:
- 3.4
- 3.5
- 3.6
# - "pypy"

before_install:
- pip install -U setuptools pip # seems travis comes with a too old setuptools for html5lib
- bash .travis.fuseki_install_optional.sh

Review comment (Member): bugger, this wasn't meant to go in this commit.

install:
- if [[ ${TRAVIS_PYTHON_VERSION%%.*} == '2' ]]; then pip install --default-timeout 60 -r requirements.py2.txt; fi
- if [[ ${TRAVIS_PYTHON_VERSION%%.*} == '3' ]]; then pip install --default-timeout 60 -r requirements.py3.txt; fi
# isodate0.4.8 is problematic with Pypy, use fixed version
- if [[ $TRAVIS_PYTHON_VERSION == 'pypy' ]]; then pip install --upgrade "https://bitbucket.org/gjhiggins/isodate/downloads/isodate-0.4.8.tar.gz"; pip install --default-timeout 60 "elementtree"; fi
- pip install --default-timeout 60 -r requirements.txt
- pip install --default-timeout 60 coverage coveralls nose-timer && export HAS_COVERALLS=1
- python setup.py install

before_script:
- if [[ $TRAVIS_PYTHON_VERSION == '2.7' ]] || [[ $TRAVIS_PYTHON_VERSION == '3.5' ]]; then flake8 --exclude=pyRdfa,extras,host,transform,rdfs,sparql,results,pyMicrodata --exit-zero rdflib; fi
- flake8 --exclude=pyRdfa,extras,host,transform,rdfs,sparql,results,pyMicrodata --exit-zero rdflib

script:
# Must run the tests in build/src so python3 doesn't get confused and run
# the python2 code from the current directory instead of the installed
# 2to3 version in build/src.
- if [[ ${TRAVIS_PYTHON_VERSION%%.*} == '2' ]]; then PYTHONWARNINGS=default nosetests --with-timer --timer-top-n 42 --with-coverage --cover-tests --cover-package=rdflib ; fi
- if [[ ${TRAVIS_PYTHON_VERSION%%.*} == '3' ]]; then PYTHONWARNINGS=default nosetests --with-timer --timer-top-n 42 --with-coverage --cover-tests --cover-package=build/src/rdflib --where=./build/src; fi
- PYTHONWARNINGS=default nosetests --with-timer --timer-top-n 42 --with-coverage --cover-tests --cover-package=rdflib

after_success:
- if [[ $HAS_COVERALLS ]] ; then coveralls ; fi
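The removed install/script steps branched on `${TRAVIS_PYTHON_VERSION%%.*}`, which uses POSIX suffix-stripping parameter expansion to reduce a version like "2.7" or "3.5" to its major number. A quick illustration (the version value is an example; Travis sets the variable itself):

```shell
TRAVIS_PYTHON_VERSION="2.7"          # example value, normally set by Travis
major=${TRAVIS_PYTHON_VERSION%%.*}   # %%.* strips the longest ".*" suffix, leaving "2"
echo "$major"                        # -> 2
if [ "$major" = "2" ]; then
    echo "would install py2-specific requirements"
fi
```

Once the codebase runs unmodified on both interpreters, this branching becomes unnecessary, which is exactly why the diff collapses the two conditional lines into one.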
17 changes: 8 additions & 9 deletions docs/plugintable.py
@@ -1,5 +1,5 @@
"""
Crappy utility for generating Sphinx tables
Crappy utility for generating Sphinx tables
for rdflib plugins
"""

@@ -11,23 +11,22 @@

p = {}

for (name, kind), plugin in _plugins.items():
for (name, kind), plugin in _plugins.items():
if "/" in name: continue # skip duplicate entries for mimetypes
if cls == kind.__name__:
if cls == kind.__name__:
p[name]="%s.%s"%(plugin.module_path, plugin.class_name)

l1=max(len(x) for x in p)
l2=max(10+len(x) for x in p.values())

def hr():
print "="*l1,"="*l2
print("="*l1,"="*l2)

hr()
print "%-*s"%(l1,"Name"), "%-*s"%(l2, "Class")
print("%-*s"%(l1,"Name"), "%-*s"%(l2, "Class"))
hr()

for n in sorted(p):
print "%-*s"%(l1,n), ":class:`~%s`"%p[n]
for n in sorted(p):
print("%-*s"%(l1,n), ":class:`~%s`"%p[n])
hr()
print

print()
4 changes: 2 additions & 2 deletions examples/conjunctive_graphs.py
@@ -53,6 +53,6 @@

# query the conjunction of all graphs

print 'Mary loves:'
print('Mary loves:')
for x in g[mary : ns.loves/ns.hasName]:
print x
print(x)
12 changes: 5 additions & 7 deletions examples/custom_datatype.py
@@ -1,6 +1,6 @@
"""

RDFLib can map between data-typed literals and python objects.
RDFLib can map between data-typed literals and python objects.

Mapping for integers, floats, dateTimes, etc. are already added, but
you can also add your own.
@@ -17,9 +17,9 @@
if __name__=='__main__':

# complex numbers are not registered by default
# no custom constructor/serializer needed since
# no custom constructor/serializer needed since
# complex('(2+3j)') works fine
bind(XSD.complexNumber, complex)
bind(XSD.complexNumber, complex)

ns=Namespace("urn:my:namespace:")

@@ -39,8 +39,6 @@

l2=list(g2)[0][2]

print l2

print l2.value == c # back to a python complex object

print(l2)

print(l2.value == c) # back to a python complex object
2 changes: 1 addition & 1 deletion examples/custom_eval.py
@@ -65,4 +65,4 @@ def customEval(ctx, part):
# Find all FOAF Agents
for x in g.query(
'PREFIX foaf: <%s> SELECT * WHERE { ?s a foaf:Agent . }' % FOAF):
print x
print(x)
28 changes: 14 additions & 14 deletions examples/film.py
@@ -1,5 +1,5 @@
#!/usr/bin/env python
"""
"""

film.py: a simple tool to manage your movies review
Simon Rozet, http://atonie.org/
@@ -10,7 +10,7 @@
- handle non IMDB uri
- markdown support in comment

Requires download and import of Python imdb library from
Requires download and import of Python imdb library from
http://imdbpy.sourceforge.net/ - (warning: installation
will trigger automatic installation of several other packages)

@@ -25,14 +25,14 @@
"""
import datetime, os, sys, re, time

try:
try:
import imdb
except ImportError:
except ImportError:
imdb = None

from rdflib import BNode, ConjunctiveGraph, URIRef, Literal, Namespace, RDF
from rdflib.namespace import FOAF, DC

from six.moves import input

storefn = os.path.expanduser('~/movies.n3')
#storefn = '/home/simon/codes/film.dev/movies.n3'
@@ -53,10 +53,10 @@ def __init__(self):
self.graph.bind('foaf', FOAF)
self.graph.bind('imdb', IMDB)
self.graph.bind('rev', 'http://purl.org/stuff/rev#')

def save(self):
self.graph.serialize(storeuri, format='n3')

def who(self, who=None):
if who is not None:
name, email = (r_who.match(who).group(1), r_who.match(who).group(2))
@@ -67,14 +67,14 @@ def who(self, who=None):
self.save()
else:
return self.graph.objects(URIRef(storeuri+'#author'), FOAF['name'])

def new_movie(self, movie):
movieuri = URIRef('http://www.imdb.com/title/tt%s/' % movie.movieID)
self.graph.add((movieuri, RDF.type, IMDB['Movie']))
self.graph.add((movieuri, DC['title'], Literal(movie['title'])))
self.graph.add((movieuri, IMDB['year'], Literal(int(movie['year']))))
self.save()

def new_review(self, movie, date, rating, comment=None):
review = BNode() # @@ humanize the identifier (something like #rev-$date)
movieuri = URIRef('http://www.imdb.com/title/tt%s/' % movie.movieID)
@@ -91,7 +91,7 @@ def new_review(self, movie, date, rating, comment=None):

def movie_is_in(self, uri):
return (URIRef(uri), RDF.type, IMDB['Movie']) in self.graph

def help():
print(__doc__.split('--')[1])

@@ -121,22 +121,22 @@ def main(argv=None):
rating = None
while not rating or (rating > 5 or rating <= 0):
try:
rating = int(raw_input('Rating (on five): '))
rating = int(input('Rating (on five): '))
except ValueError:
rating = None
date = None
while not date:
try:
i = raw_input('Review date (YYYY-MM-DD): ')
i = input('Review date (YYYY-MM-DD): ')
date = datetime.datetime(*time.strptime(i, '%Y-%m-%d')[:6])
except:
date = None
comment = raw_input('Comment: ')
comment = input('Comment: ')
s.new_review(movie, date, rating, comment)
else:
help()

if __name__ == '__main__':
if not imdb:
if not imdb:
raise Exception('This example requires the IMDB library! Install with "pip install imdbpy"')
main()
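The film.py diff above swaps `raw_input` for `six.moves.input`. Under the hood that alias simply picks the right builtin per interpreter — on py2, `input()` would `eval()` the typed text, while `raw_input()` returned the raw string; on py3, `input()` already returns the raw string. A sketch of the idea without the six dependency (`compat_input` is an illustrative name):

```python
import sys

if sys.version_info[0] >= 3:
    compat_input = input        # py3 input() returns the raw string
else:
    compat_input = raw_input    # noqa: F821 -- py2 input() would eval() the text

# compat_input('Rating (on five): ') now behaves identically on both versions
```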
8 changes: 4 additions & 4 deletions examples/foafpaths.py
@@ -4,7 +4,7 @@
in triple-patterns.

We overload some python operators on URIRefs to allow creating path
operators directly in python.
operators directly in python.

============ =========================================
Operator Path
@@ -13,7 +13,7 @@
``p1 | p2`` Path alternative
``p1 * '*'`` chain of 0 or more p's
``p1 * '+'`` chain of 1 or more p's
``p1 * '?'`` 0 or 1 p
``p1 * '?'`` 0 or 1 p
``~p1`` p1 inverted, i.e. (s p1 o) <=> (o ~p1 s)
``-p1`` NOT p1, i.e. any property but p1
============ =========================================
@@ -38,7 +38,7 @@

tim = URIRef("http://www.w3.org/People/Berners-Lee/card#i")

print "Timbl knows:"
print("Timbl knows:")

for o in g.objects(tim, FOAF.knows / FOAF.name):
print o
print(o)
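The operator table in foafpaths.py works because rdflib overloads Python operators on URIRef objects to build SPARQL property paths. The mechanism is plain operator overloading; here is a toy, hypothetical `Path` class (not rdflib's real implementation) showing how `/`, `|`, `~`, and `*` can compose path expressions as strings:

```python
class Path:
    """Toy property-path builder; `expr` holds a SPARQL-ish path string."""

    def __init__(self, expr):
        self.expr = expr

    def __truediv__(self, other):   # p1 / p2  -> sequence path
        return Path("%s/%s" % (self.expr, other.expr))

    def __or__(self, other):        # p1 | p2  -> alternative path
        return Path("%s|%s" % (self.expr, other.expr))

    def __invert__(self):           # ~p1      -> inverse path
        return Path("^%s" % self.expr)

    def __mul__(self, mod):         # p * '*' / '+' / '?' -> modified path
        return Path("%s%s" % (self.expr, mod))


knows = Path("foaf:knows")
name = Path("foaf:name")

print((knows / name).expr)   # foaf:knows/foaf:name
print((~knows).expr)         # ^foaf:knows
print((knows * '*').expr)    # foaf:knows*
```

In rdflib itself the overloads live on URIRef, so `FOAF.knows / FOAF.name` in the example builds a path object that `g.objects()` evaluates directly against the graph.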
31 changes: 18 additions & 13 deletions examples/graph_digest_benchmark.py
@@ -1,20 +1,25 @@
#!/usr/bin/env python


'''
This benchmark will produce graph digests for all of the
downloadable ontologies available in Bioportal.
'''

from rdflib import *
from __future__ import print_function

from rdflib import Namespace, Graph
from rdflib.compare import to_isomorphic
from six.moves.urllib.request import urlopen
from six.moves import queue
import sys, csv
from urllib import *

from io import StringIO
from collections import defaultdict
from urllib2 import urlopen

from multiprocessing import *
from Queue import Empty

from multiprocessing import Process, Semaphore, Queue


bioportal_query = '''
PREFIX metadata: <http://data.bioontology.org/metadata/>
@@ -63,18 +68,18 @@ def worker(q, finished_tasks, dl_lock):
og = Graph()
try:
og.load(stats['download_url'])
print stats['ontology'], stats['id']
print(stats['ontology'], stats['id'])
ig = to_isomorphic(og)
graph_digest = ig.graph_digest(stats)
finished_tasks.put(stats)
except Exception as e:
print 'ERROR', stats['id'], e
print('ERROR', stats['id'], e)
stats['error'] = str(e)
finished_tasks.put(stats)
except Empty:
except queue.Empty:
pass
for i in range(int(threads)):
print "Starting worker", i
print("Starting worker", i)
t = Process(target=worker, args=[tasks, finished_tasks, dl_lock])
t.daemon = True
t.start()
@@ -100,7 +105,7 @@ def bioportal_benchmark(apikey, output_file, threads):
metadata = Namespace("http://data.bioontology.org/metadata/")
url = 'http://data.bioontology.org/ontologies?apikey=%s' % apikey
ontology_graph = Graph()
print url
print(url)
ontology_list_json = urlopen(url).read()
ontology_graph.parse(StringIO(unicode(ontology_list_json)), format="json-ld")
ontologies = ontology_graph.query(bioportal_query)
@@ -123,18 +128,18 @@ def worker(q, finished_tasks, dl_lock):
og.load(stats['download_url'] + "?apikey=%s" % apikey)
finally:
dl_lock.release()
print stats['ontology'], stats['id']
print(stats['ontology'], stats['id'])
ig = to_isomorphic(og)
graph_digest = ig.graph_digest(stats)
finished_tasks.put(stats)
except Exception as e:
print 'ERROR', stats['id'], e
print('ERROR', stats['id'], e)
stats['error'] = str(e)
finished_tasks.put(stats)
except Empty:
pass
for i in range(int(threads)):
print "Starting worker", i
print("Starting worker", i)
t = Process(target=worker, args=[tasks, finished_tasks, dl_lock])
t.daemon = True
t.start()
6 changes: 3 additions & 3 deletions examples/prepared_query.py
@@ -4,7 +4,7 @@
SPARQL Queries be prepared (i.e parsed and translated to SPARQL algebra)
by the :meth:`rdflib.plugins.sparql.prepareQuery` method.

When executing, variables can be bound with the
When executing, variables can be bound with the
``initBindings`` keyword parameter


@@ -17,7 +17,7 @@
if __name__=='__main__':

q = prepareQuery(
'SELECT ?s WHERE { ?person foaf:knows ?s .}',
'SELECT ?s WHERE { ?person foaf:knows ?s .}',
initNs = { "foaf": FOAF })

g = rdflib.Graph()
@@ -26,4 +26,4 @@
tim = rdflib.URIRef("http://www.w3.org/People/Berners-Lee/card#i")

for row in g.query(q, initBindings={'person': tim}):
print row
print(row)
10 changes: 5 additions & 5 deletions examples/rdfa_example.py
Original file line number Diff line number Diff line change
@@ -1,5 +1,5 @@
"""

A simple example showing how to process RDFa from the web

"""
@@ -11,12 +11,12 @@

g.parse('http://www.worldcat.org/title/library-of-babel/oclc/44089369', format='rdfa')

print "Books found:"
print("Books found:")

for row in g.query("""SELECT ?title ?author WHERE {
[ a schema:Book ;
for row in g.query("""SELECT ?title ?author WHERE {
[ a schema:Book ;
schema:author [ rdfs:label ?author ] ;
schema:name ?title ]
FILTER (LANG(?title) = 'en') } """):

print "%s by %s"%(row.title, row.author)
print("%s by %s"%(row.title, row.author))