pubs/tests/fixtures.py
Olivier Mangin 506bb24e50 Many cleanups in unicode encoding.
Originally intended to fix a bug in edit when opening files with non-ASCII
characters.

Now all data is assumed to be manipulated as unicode. Therefore all
values returned by functions from content are unicode. There are a few
exceptions in order to download non-unicode data without failing to
decode. These exceptions are marked by the 'byte_' prefix.
The io package is used instead of the builtin open for all file
transactions.

The fake_env test helper has to be modified (hacked, to be honest) since
fake_filesystem does not offer a mock of io.
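The kind of workaround meant here can be sketched as an in-memory
stand-in for io that tests substitute for the real module (this FakeIO
class is a hypothetical illustration; the actual fake_env helper is more
involved):

```python
from __future__ import unicode_literals
import io


class _WriteBuffer(io.StringIO):
    """StringIO that saves its content into a dict when closed."""

    def __init__(self, store, path):
        io.StringIO.__init__(self)
        self._store = store
        self._path = path

    def close(self):
        # Persist the written text before discarding the buffer.
        self._store[self._path] = self.getvalue()
        io.StringIO.close(self)


class FakeIO(object):
    """Minimal in-memory substitute for the io module used in tests."""

    def __init__(self):
        self.files = {}  # path -> unicode content

    def open(self, path, mode='r', encoding=None):
        if 'w' in mode:
            return _WriteBuffer(self.files, path)
        return io.StringIO(self.files[path])
```

A test would then point the code under test at a FakeIO instance instead
of the real io module, so no actual files are touched.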

This is still WIP. Two issues still have to be solved:
- first, a UnicodeWarning is raised by bibparser,
- also, config still uses the builtin open directly.
2014-04-23 21:28:20 +02:00


# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import dotdot
from pubs import endecoder
import str_fixtures
coder = endecoder.EnDecoder()
franny_bib = """@article{Franny1961,
author = "Salinger, J. D.",
title = "Franny and Zooey",
year = "1961"}
"""
doe_bib = """
@article{Doe2013,
author = "Doe, John",
title = "Nice Title",
year = "2013"}
"""
dummy_metadata = {'docfile': 'docsdir://hop.la', 'tags': set(['a', 'b'])}
franny_bibdata = coder.decode_bibdata(franny_bib)
franny_bibentry = franny_bibdata['Franny1961']
doe_bibdata = coder.decode_bibdata(doe_bib)
doe_bibentry = doe_bibdata['Doe2013']
turing_bibdata = coder.decode_bibdata(str_fixtures.turing_bib)
turing_bibentry = turing_bibdata['turing1950computing']
turing_metadata = coder.decode_metadata(str_fixtures.turing_meta)
page_bibdata = coder.decode_bibdata(str_fixtures.bibtex_raw0)
page_bibentry = page_bibdata['Page99']
page_metadata = coder.decode_metadata(str_fixtures.metadata_raw0)