Enhancing Content Selection and Extraction
Mechanisms in Web Browsers and PDF
Senior Honors Thesis by Katie Han
Advisor: Andries van Dam
Second Reader: James Tompkin
Abstract
A common task during research or document organization is selecting fragments of content
from the web or PDF documents and migrating the information to different environments, such
as note-taking applications and word processors. However, the native selection and extraction
capabilities of standard web browsers and PDF viewers offer little help in scraping resources in
this manner. For web pages, the underlying HTML structure that represents images, lists,
tables, links, and other formatting is often lost in the process; for PDF documents, selecting and
copying any content other than basic text is virtually impossible. In this project, I explore and
extend the work done by cTed, a web browser plug-in that allows intuitive gestures to select
content on websites with arbitrary layouts. I then apply those selection mechanisms to PDF
documents and implement software that enables extracting excerpts from documents with little
loss of the embedded information.

Introduction
In recent years, the main channel for people to consume and share information has shifted
towards the web. Whether it is morning news, research articles, or interesting blog posts,
information is most often accessed through web pages displayed on browsers. When absorbing
these materials, people often intend to clip excerpts for the purpose of sharing with others or
reorganizing the information in a different environment, such as Microsoft Word or OneNote.
However, selecting specific fragments of web pages and exporting the clipped content is generally
unintuitive and difficult. For instance, a web browser’s basic click-and-drag functionality provides
only limited selection and is insufficient in reflecting what users want to capture from the page. On
the other hand, a device’s screenshot tool simply grabs a bitmap representation of the selected
text, images, tables, or videos instead of conserving the important underlying HTML elements.
While this method allows the user to share a visual representation of the page, other resources
behind the content, such as text and links, are lost.
Apart from HTML web pages, information is frequently shared and displayed via PDF
documents, especially in the context of research and scholarship. Similar problems exist with
PDF file formats when selecting and extracting excerpts from a document. Current PDF
viewer applications provide limited interactions between the user and the underlying content of
the PDF. While many applications focus on displaying the pages of a PDF document and editing

the document on the page level, little work has been done to allow rearrangement or
manipulation of smaller sections of a page. Attempts to do so using the selection functionality of
existing PDF viewers introduce users to inevitable obstacles. For example, selecting a text
region requires clicking at a precise starting location; otherwise, the viewer will not recognize the
gesture at all. A solution to this obstacle would be software that takes the given input points and
generates a logical selection for the page from that data. In addition, information workers may
need to extract and organize the information in a separate application or medium. A standard
scenario I consider in depth is copy-and-pasting a selected portion of a PDF document into a
note-taking platform, such as Microsoft Word or OneNote, perhaps to create a summary of the
document. Without the aid of advanced and costly PDF editors, simply copying or extracting an
image from a PDF document is essentially impossible.
During this project, I started off by delving into the work done by cTed [1], a plug-in for Google
Chrome that originates from fellow researchers at Brown University. cTed focuses on enhancing
user interaction with the information presented on a web page by allowing natural selection
gestures for elements in the Document Object Model (DOM). I identified existing problems with
the current implementation and came up with solutions to amend those flaws.
Using the knowledge gained from studying cTed, I moved on to develop a similar tool for PDF
documents, another common source of information. I created a dynamic-link library (DLL) that
provides intuitive content-selection functionality, building on an open-source software
library called MuPDF [2] to interact with the underlying structure of PDF documents. The
standalone library can easily be integrated into other software applications, such as NuSys. In
my project, the DLL is accompanied by SelectPDF, a Universal Windows Platform (UWP)
application for advanced PDF selection and extraction. To facilitate the process of migrating
information from SelectPDF to a separate platform such as a word processor, the program
immediately makes the selected content available in the clipboard of the OS and thus allows it
to be read by any outside application that supports importing via the clipboard. In this paper, I
present the challenges and outcomes of the process, along with the implementation details.

Motivation
NuSys
NuSys, a long-term project by the Graphics Group at Brown University, is a platform for
supporting individual and small group knowledge work. It is a collaboration tool for gathering,
exploring, organizing, and presenting multimedia information that focuses on the use of pen and
touch interaction. Users can create private or shared unbounded 2D workspaces for
synchronous or asynchronous collaboration.
Presently, users can upload various content to NuSys’s library manually, but there exists no
easy way for them to import web content to the system directly. In addition, viewing and
annotating PDF documents play a central role in document organization within NuSys. My
project primarily focuses on complementing NuSys’s features for these tasks. Both applications,

cTed and SelectPDF, aim to be integrated with NuSys to allow a more fluid experience when
importing web content or working with regions of PDF documents.
User Study
To evaluate which selection gestures are in fact most useful and natural for users, I conducted a
preliminary user study. I designed three portions for participants to complete. First, participants
were asked to fill out a preliminary online survey about their usual channels and mechanisms of
clipping and sharing information found on the web. Then, I conducted a pen-and-paper activity of
“selecting,” or marking, elements of printed web pages and documents to observe the most
intuitive actions for the participants. Lastly, I asked a few questions verbally to bring out ideas on
the kinds of interactions and features a program such as cTed and SelectPDF should exhibit.

Figure 1. An example of a lasso shape drawn with pen and paper when a participant was asked to select the
bottommost paragraph and the image to the right.
The survey indicated that the participants, who were mostly college students, share content from the
Internet more than twice a day on average, mainly by means of copy-and-paste and
device screenshots. Moreover, only two out of ten participants use a specific app or browser
plugin to capture and share information from the web, indicating that currently, software such as
cTed is not widely used. However, when asked at the end of the user study if they have ever
encountered a situation where they had trouble selecting specific content from the web, seven
out of ten participants answered “yes” and provided examples of such situations. One
participant noted that “some websites mess up selections because of ads on the side,” while
another described the difficulties she faced when trying to copy and paste a data table from the
web to her lab’s group message. Many also mentioned that copying and pasting snippets of
information is essential when writing a research paper for a class.

From the pen-and-paper activity, I could see interesting trends in how people typically mark up a
specific part of a page. In addition to reiterating the general use of lines on text elements,
brackets drawn on the left side of target regions, and marquee shapes that indicate a rectangular
area, the study showed common usage of the lasso gesture. Figure 1 illustrates an example of
the use of convex shapes to mark multiple elements on a page. Meanwhile, people frequently
used different colors to mark regions, especially image elements, as shown in Figure 2.

Figure 2. An example of color categorization on a page of images.

Related Work
Many browser extensions and tools have emerged in the past to allow users to annotate, clip,
and save content from the web. Some examples include Diigo [3], Hypothesis [4], Evernote
Web Clipper [5], and SurfMark [6], all of which serve as bookmarking tools and focus on
facilitating education and research. In particular, many of these platforms provide the ability to
highlight and annotate web pages, followed by a way to organize and share the clipped
resources. Pinterest’s browser button [7] encourages users to clip web content and share it
with the Pinterest community in a social context. However, all of these extensions rely on the
web browser’s basic selection capabilities to extract information or directly save the entire URL
to a web page instead of arranging more intuitive interactions for the users. On the other hand,
Microsoft Edge [8] contains a built-in feature for drawing and writing notes on the web page,
catered toward users of pen and touch-screen devices. However, this functionality simply
grabs a screenshot of the selection and does not keep any references to the original HTML
structure of the web page.
PDF documents are also often opened and viewed on standard web browsers, facing the same
shortcomings when a user attempts to select text, tables, or images in the document.
Alternatively, other standalone software provides more flexibility in the tasks that can be carried
out on PDF documents. For instance, Adobe Acrobat DC [9] offers a variety of features,
including editing, signing, converting, and sharing documents. However, fine-grained selections
based on natural gestures, especially those from pen and touch interactions, are not optimized.
Extraction of content occurs mostly on the grid level or by clicking and dragging lines of text.
Several applications cater more to highlighting, annotating, and even drawing on PDF
documents—namely Drawboard PDF [10] on Windows and Preview [11] on Mac OS. Yet, little
work has been done to support the workflow of selecting specific regions of a document
intuitively and extracting the information from those regions immediately.

Selections
Before the start of this project, cTed presented a proof of concept for three intuitive selection
gestures: line, bracket, and marquee. Line selection, the simplest gesture of all, finds the
underlying elements in a straight line from the start point to the end point. Bracket gesture is a
vertical line along the left side of the targeted fragment, much like how one would mark a
paragraph on paper with pen. The software intelligently identifies the elements that lie within the
given “paragraph” and captures the section that the user intended to select. Lastly, marquee
selection allows the user to draw an arbitrary rectangular box by dragging diagonally from top
left to bottom right. All elements within the bounds are selected.
Lasso is another selection gesture that has been explored in the past by the cTed team.
Commonly used on pen and touch devices, the lasso mechanism allows users to draw a freehand
selection border around the elements they want to extract. Based on the findings of my user
study, I decided to pursue the implementation of a lasso selection in addition to the three existing
selection gestures. Below, I illustrate in depth how these four selections were implemented and
modified in cTed and SelectPDF.

cTed
I now describe how the four gestural
selections can be implemented as a web
browser extension, using JavaScript and
the DOM API. Because HTML elements
are organized in a nested structure, the
parent-child relationships can be exploited to gain access to different regions of the page. cTed,
which is a plug-in for Google Chrome, enhances the web browser’s functionality by injecting
JavaScript code into any website that the user visits. When selecting content from the web page,
it ensures that the underlying structure and resources, such as links, formatting, and tables, are
conserved by keeping track of the HTML elements that represent the selected region. Below, I
give an overview of the implementation details and the modifications I made to the existing code
base for cTed, along with the results of my program in comparison to the native selection
behavior in the Google Chrome web browser.

Figure 3. An example HTML structure with parent-child relationships between elements.
In order to allow advanced selection gestures, a fixed-positioned, transparent canvas element is
appended to the DOM when a web page is first loaded, detecting the input stroke of the user. All
of the selections are based on two important calls: Document.elementFromPoint and
Range.getClientRects. The first function returns the topmost HTML element at a given (x, y)
coordinate in the viewport, and the second function returns a list of bounding boxes for the
given element. More details on how these methods are used to implement each selection are
outlined below.
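
To make the use of these two calls concrete, the sketch below (illustrative TypeScript, not the exact cTed code) retrieves the element under a point and the bounding boxes of its contents; it assumes the transparent overlay canvas is made hit-test transparent (for example with CSS pointer-events: none) so that elementFromPoint can see the page underneath.

// Return the bounding boxes of the content of the element under a viewport point.
function elementAndBoxesAt(x: number, y: number): DOMRect[] {
  const el = document.elementFromPoint(x, y);   // topmost element at the (x, y) coordinate
  if (!el) return [];
  const range = document.createRange();
  range.selectNodeContents(el);                 // wrap the element's contents in a Range
  return Array.from(range.getClientRects());    // one box per rendered line or fragment
}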
Line Selection
A line selection allows a quick gesture of selecting a line of text. Much like the traditional
selection mechanism, the start and end points of the cursor or the pointer define the region to
be selected on a horizontal line. The leftmost point of the user’s input stroke maps to the starting
point, and the rightmost point maps to the ending point. In order to find all of the underlying
HTML elements for this area, the algorithm first finds the two elements that lie under the starting
point and the ending point using Document.elementFromPoint, then acquires the first shared
parent of those two elements. Lastly, the child nodes of that parent are traversed, and any
element that overlaps with the input line region is marked and extracted. To determine whether
the current element is included in the selection area or not, the coordinates of the bounding
boxes for the element returned by a call to Range.getClientRects are compared to the input
stroke coordinates accordingly. The core concepts used in this basic selection are applicable to
the other three selection mechanisms, as explained below. From the existing implementation of
this selection in cTed, I removed outdated or duplicate code snippets to make the process more
efficient. For instance, unnecessary calculation of element widths and heights was replaced by
simply accessing the proper fields of ClientRect objects.
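
The sketch below outlines this traversal; the helper names are mine, and the overlap test is the simple bounding-box comparison described above rather than cTed's exact code.

type Band = { left: number; right: number; top: number; bottom: number };

// First shared ancestor of two elements in the DOM tree.
function commonParent(a: Element, b: Element): Element | null {
  let node: Element | null = a;
  while (node && !node.contains(b)) node = node.parentElement;
  return node;
}

// Does any rendered box of the element overlap the horizontal band of the stroke?
function overlapsBand(el: Element, band: Band): boolean {
  const range = document.createRange();
  range.selectNodeContents(el);
  return Array.from(range.getClientRects()).some(r =>
    r.right >= band.left && r.left <= band.right &&
    r.bottom >= band.top && r.top <= band.bottom);
}

function lineSelection(start: { x: number; y: number }, end: { x: number; y: number }): Element[] {
  const a = document.elementFromPoint(start.x, start.y);
  const b = document.elementFromPoint(end.x, end.y);
  if (!a || !b) return [];
  const parent = commonParent(a, b);
  if (!parent) return [];
  const band: Band = {
    left: Math.min(start.x, end.x), right: Math.max(start.x, end.x),
    top: Math.min(start.y, end.y), bottom: Math.max(start.y, end.y) };
  // Keep the children of the shared parent that intersect the stroke's band.
  return Array.from(parent.children).filter(child => overlapsBand(child, band));
}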
Bracket Selection
A bracket selection is marked by drawing a vertical line along the left side of the region that the
user intends to select. The tool can be used on paragraphs, images, lists, or tables. In order to
determine which elements in the DOM are desired, a rectangular area defined by the bracket
line on the left and the right edge of the web browser’s viewport is considered. With a step size
of 20 pixels, the sample points span from the uppermost point to the bottommost point of the
vertical line.
Figure 4. An example of bracket selection on a list. The leftmost image shows the input stroke, the middle image
shows a buggy version that previously existed in cTed, and the rightmost image shows the updated version with my
new algorithm.

I found that bracket selection was the least reliable of the three existing selection tools in cTed.
Originally, the algorithm sampled points from the bracketed area and assigned a weighted score
based on their location, which resulted in various bugs, such as selecting every other line for
certain list elements. Instead, I decided to come up with a revised algorithm that utilizes the
parent relationships between DOM elements, a concept that proved useful in generating
selections for line and marquee. The following logic provides a better alternative:

1. For every “row” of the bracketed area:
a. Sample points until the right edge of the page.
b. For each sample point, find the first common parent with the
element at the row’s leftmost point.
c. Keep the common parents in a map with the number of times they
appear as the common parent in the row.
2. Select the common parent with the highest count from each row.
3. Remove duplicates and select all in the final list of selected
elements.
In the above description, the “rows” refer to the horizontal sets of sample points at every step
size along the y-direction within the rectangular region. As before, the element at each
sample point is obtained using the Document.elementFromPoint method. Because my
algorithm takes into account the ancestral and sibling relationships between the elements along
horizontal lines, the most logical candidates that contain those elements can be chosen. By
scoring these selection candidates by the number of times they appear as a common parent
with the leftmost element, I ensure that the elements are weighted properly from left to right.
Lastly, since each row computes the selection candidates independently, the bracket selection
gesture can be used on multiple consecutive elements vertically. Overall, my new algorithm
improved the accuracy of the bracket selection tool.
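
The sketch below captures this row-sampling logic; the 20-pixel step comes from the description above, while the helper and variable names are illustrative rather than cTed's actual code.

const STEP = 20;   // sampling step in pixels, as described above

function firstCommonParent(a: Element, b: Element): Element | null {
  let node: Element | null = a;
  while (node && !node.contains(b)) node = node.parentElement;
  return node;
}

function bracketSelection(xLeft: number, yTop: number, yBottom: number): Element[] {
  const selected = new Set<Element>();
  const pageRight = document.documentElement.clientWidth;
  for (let y = yTop; y <= yBottom; y += STEP) {             // one "row" per step along the bracket
    const leftEl = document.elementFromPoint(xLeft, y);
    if (!leftEl) continue;
    const counts = new Map<Element, number>();
    for (let x = xLeft; x <= pageRight; x += STEP) {        // sample toward the right edge
      const el = document.elementFromPoint(x, y);
      if (!el) continue;
      const parent = firstCommonParent(leftEl, el);
      if (parent) counts.set(parent, (counts.get(parent) ?? 0) + 1);
    }
    let best: Element | null = null;                        // candidate with the highest count
    let bestCount = 0;
    for (const [candidate, count] of counts) {
      if (count > bestCount) { best = candidate; bestCount = count; }
    }
    if (best) selected.add(best);
  }
  return Array.from(selected);                              // the Set removes duplicates
}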

Marquee Selection
A marquee selection is defined by a straight line that runs diagonally from the top-left corner to
the bottom-right corner. The rectangular area delineated by these parameters marks the region
to be selected on the page. Similar to a line selection, the elements that lie underneath the
boxed area are fetched using Document.elementFromPoint. Starting from the top-left point,
DOM elements are traversed both horizontally and vertically, keeping a list of the elements that
lie inside the given box. Then, the plug-in searches for the first common ancestor that
encompasses all of the elements in the list. Lastly, all children of the parent are traversed and
checked if they lie within the selection boundaries using a call to Range.getClientRects. The
HTML elements that meet the criteria are highlighted and extracted.
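
A sketch of this flow appears below with illustrative helper names; whether a child must be fully contained in the box or merely overlap it is my simplification, since the text only states that children are checked against the selection boundaries.

type Box = { left: number; top: number; right: number; bottom: number };
const STEP = 20;   // assumed sampling step, matching the one used for bracket selection

function lowestCommonAncestor(elements: Element[]): Element | null {
  let ancestor: Element | null = elements[0] ?? null;
  while (ancestor && !elements.every(el => ancestor!.contains(el))) {
    ancestor = ancestor.parentElement;                      // walk up until all elements are contained
  }
  return ancestor;
}

function insideBox(el: Element, box: Box): boolean {
  const range = document.createRange();
  range.selectNodeContents(el);
  const rects = Array.from(range.getClientRects());
  return rects.length > 0 && rects.every(r =>
    r.left >= box.left && r.right <= box.right && r.top >= box.top && r.bottom <= box.bottom);
}

function marqueeSelection(box: Box): Element[] {
  const hits: Element[] = [];
  for (let y = box.top; y <= box.bottom; y += STEP) {       // traverse the box vertically
    for (let x = box.left; x <= box.right; x += STEP) {     // and horizontally
      const el = document.elementFromPoint(x, y);
      if (el && !hits.includes(el)) hits.push(el);
    }
  }
  const ancestor = lowestCommonAncestor(hits);
  if (!ancestor) return [];
  return Array.from(ancestor.children).filter(child => insideBox(child, box));
}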

Figure 5. A display of the capabilities of marquee selection. If a user intends to select only the middle column of
the article list layout, native browser selection returns all columns. On the other hand, given the same starting
and ending points, marquee selection returns the desired region only.

Based on the original codebase of cTed, I fixed various bugs that had resulted in overlooking
some of the child elements inside the selection region. Figure 6 illustrates an example of such a
case. Furthermore, based on the findings of my user study, I came up with a more natural way to
mark an image as “selected.” Instead of creating a label at the corner of an image, I highlight the
surrounding edges of the image, making use of color recognition. Lastly, since the marquee
selection’s algorithm displayed striking similarities with that of line selection, I refactored the code
so that common functionalities were shared by calls to the appropriate methods.

Figure 6. A comparison showing the improvements made by my changes. The leftmost image is the input marquee,
the middle image is the original cTed version's results, and the rightmost image is my extended version's result.

Lasso Selection
The lasso selection tool for cTed was implemented from scratch during this project. The gesture
selects elements based on an arbitrary boundary drawn by the user. I tried out a couple of
different approaches, one of which attempted to break down the given area into smaller
rectangles and subsequently apply the marquee selection on those regions. However, I decided
that it is most reliable to build my algorithm on top of a standard point-in-polygon test, as the
input list of points would always define a polygon.

The first challenge is to recognize a lasso gesture when a user inputs a stroke. The input stroke
on our transparent canvas does not have any native modes, meaning it needs to be categorized
based on the list of input points. Given such a list, cTed calculates the stroke metric, which
represents the error rate between the 2D vectors formed by each consecutive pair of points. If
this value exceeds the threshold, it is safely concluded that the input stroke is not a straight line,
and a lasso selection is applied. Afterwards, the list of points is condensed by sampling points
only when there is a significant change in the path they represent. This step aids the point-in-polygon test that will be carried out during the selection algorithm.
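
The exact stroke metric is not spelled out above, so the sketch below shows one plausible reading of it: average how far consecutive segment directions deviate from the overall start-to-end direction and treat a large deviation as a lasso. The threshold values are assumptions, not cTed's tuned constants.

type Point = { x: number; y: number };
const LASSO_THRESHOLD = 0.5;   // assumed average angular deviation (radians)

function isLassoStroke(points: Point[]): boolean {
  if (points.length < 3) return false;
  const dir = (a: Point, b: Point) => Math.atan2(b.y - a.y, b.x - a.x);
  const overall = dir(points[0], points[points.length - 1]);
  let deviation = 0;
  for (let i = 1; i < points.length; i++) {
    let d = Math.abs(dir(points[i - 1], points[i]) - overall);
    if (d > Math.PI) d = 2 * Math.PI - d;                   // wrap the difference into [0, pi]
    deviation += d;
  }
  return deviation / (points.length - 1) > LASSO_THRESHOLD; // large average error: not a straight line
}

// Condense the stroke: keep a point only when the path direction changes noticeably.
function condense(points: Point[], minTurn = 0.2): Point[] {
  const out: Point[] = points.slice(0, 1);
  for (let i = 1; i < points.length - 1; i++) {
    const a = out[out.length - 1], b = points[i], c = points[i + 1];
    const turn = Math.abs(Math.atan2(c.y - b.y, c.x - b.x) - Math.atan2(b.y - a.y, b.x - a.x));
    if (Math.min(turn, 2 * Math.PI - turn) > minTurn) out.push(b);
  }
  if (points.length > 1) out.push(points[points.length - 1]);
  return out;
}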

Figure 7. A diagram illustrating the concept behind the ray-casting algorithm.

I chose to use the ray-casting algorithm to determine if an
(x, y) coordinate lies inside the input lasso shape. During
this process, I create a straight line, or ray, that starts from
the given point and extends to infinity in the x-direction,
aiming directly to the right. I then calculate how many times
this line intersects with the line segments on the perimeter of
the polygon. If the count is odd-numbered, it indicates that the
point lies inside the polygon; if even, the point lies outside the
polygon. I created a new util class named Polygon that
implements this logic.
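
A compact sketch of this test is shown below; the thesis implements it in the Polygon util class, and the code here is illustrative rather than that class.

type Point = { x: number; y: number };

// Ray-casting point-in-polygon test: cast a ray from p toward +x and count
// how many polygon edges it crosses; an odd count means p is inside.
function pointInPolygon(p: Point, poly: Point[]): boolean {
  let inside = false;
  for (let i = 0, j = poly.length - 1; i < poly.length; j = i++) {
    const a = poly[i], b = poly[j];
    const straddles = (a.y > p.y) !== (b.y > p.y);          // edge crosses the ray's y-level
    if (straddles) {
      const xAtY = a.x + ((p.y - a.y) / (b.y - a.y)) * (b.x - a.x);   // x where the edge meets that y
      if (p.x < xAtY) inside = !inside;                     // crossing lies to the right of p
    }
  }
  return inside;
}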

Together with the point-in-polygon test, another major component of
the lasso selection is to find the overlapping area between a DOM
element’s bounding box, represented by a rectangle, and the user
input lasso, represented by a polygon. By considering how much of the
element lies inside the lasso shape, I can judge whether an element
should be included in the selection or not. I approached this problem
by first finding intersection points between the line segments of the
polygon and the line segments of the rectangle. Together with any
other polygon points that lie inside the rectangle and any other
rectangle vertices that lie inside the polygon, these points can be used
to calculate the desired area, as shown in Figure 8. Again, the
rectangular bounding boxes of the HTML elements are obtained
through the Range.getClientRects functionality.

Figure 8. A diagram illustrating how the overlapping area of a rectangle and a polygon is calculated.
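
One standard way to realize this computation is to clip the lasso polygon against the rectangle (Sutherland-Hodgman clipping yields exactly the intersection points and contained vertices described above) and then take the area of the clipped polygon with the shoelace formula. The sketch below is mine, not the thesis code.

type Point = { x: number; y: number };
type Rect = { left: number; top: number; right: number; bottom: number };

// Unsigned polygon area via the shoelace formula.
function polygonArea(poly: Point[]): number {
  let sum = 0;
  for (let i = 0; i < poly.length; i++) {
    const a = poly[i], b = poly[(i + 1) % poly.length];
    sum += a.x * b.y - b.x * a.y;
  }
  return Math.abs(sum) / 2;
}

// Clip the lasso polygon to the rectangle, one boundary at a time (Sutherland-Hodgman).
function clipToRect(poly: Point[], rect: Rect): Point[] {
  const lerp = (a: Point, b: Point, t: number): Point =>
    ({ x: a.x + t * (b.x - a.x), y: a.y + t * (b.y - a.y) });
  const clip = (pts: Point[], keep: (p: Point) => boolean,
                cross: (a: Point, b: Point) => Point): Point[] => {
    const out: Point[] = [];
    for (let i = 0; i < pts.length; i++) {
      const a = pts[i], b = pts[(i + 1) % pts.length];
      if (keep(a)) out.push(a);                             // keep vertices inside this boundary
      if (keep(a) !== keep(b)) out.push(cross(a, b));       // add the boundary intersection
    }
    return out;
  };
  let p = poly;
  p = clip(p, q => q.x >= rect.left,   (a, b) => lerp(a, b, (rect.left - a.x) / (b.x - a.x)));
  p = clip(p, q => q.x <= rect.right,  (a, b) => lerp(a, b, (rect.right - a.x) / (b.x - a.x)));
  p = clip(p, q => q.y >= rect.top,    (a, b) => lerp(a, b, (rect.top - a.y) / (b.y - a.y)));
  p = clip(p, q => q.y <= rect.bottom, (a, b) => lerp(a, b, (rect.bottom - a.y) / (b.y - a.y)));
  return p;
}

// Fraction of the element's bounding box covered by the lasso polygon.
function overlapRatio(poly: Point[], rect: Rect): number {
  const rectArea = (rect.right - rect.left) * (rect.bottom - rect.top);
  return rectArea > 0 ? polygonArea(clipToRect(poly, rect)) / rectArea : 0;
}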

Putting together the components discussed above, I present a summary of my lasso selection
algorithm:
1. Create a rectangular bounding box around the given polygon.
2. For each sample point along the “row” and “column” of the box:
   a. If the point lies inside the polygon (ray-casting algorithm):
      i. Select the element if more than 60% of its area overlaps with the polygon.
      ii. Repeat step (i) for parent elements until the criterion no longer holds.
3. Remove duplicates and select all in the final list of selected elements.

The “rows” and “columns” of the rectangular area depend on a step size of 20 pixels, as they did
in bracket selection. The threshold for the overlapping area, 60%, was derived through
experimentation; the relatively low value reflects the lack of precision with which users draw a
lasso shape on the screen.
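
Tying these pieces together, the sketch below reuses the pointInPolygon and overlapRatio helpers from the earlier sketches (declared here rather than redefined); the sampling step and the 60% threshold come from the description above, while averaging coverage over an element's client rects is my own simplification.

type Point = { x: number; y: number };
type Rect = { left: number; top: number; right: number; bottom: number };
declare function pointInPolygon(p: Point, poly: Point[]): boolean;   // from the earlier sketch
declare function overlapRatio(poly: Point[], rect: Rect): number;    // from the earlier sketch

const STEP = 20;           // sampling step, as in bracket selection
const MIN_OVERLAP = 0.6;   // the 60% overlap threshold

function elementOverlap(el: Element, lasso: Point[]): number {
  const range = document.createRange();
  range.selectNodeContents(el);
  const rects = Array.from(range.getClientRects());
  if (rects.length === 0) return 0;
  // Average coverage over the element's rendered fragments (a simplification).
  return rects.reduce((sum, r) =>
    sum + overlapRatio(lasso, { left: r.left, top: r.top, right: r.right, bottom: r.bottom }), 0)
    / rects.length;
}

function lassoSelection(lasso: Point[]): Element[] {
  const xs = lasso.map(p => p.x), ys = lasso.map(p => p.y);
  const box: Rect = { left: Math.min(...xs), right: Math.max(...xs),
                      top: Math.min(...ys), bottom: Math.max(...ys) };   // step 1
  const selected = new Set<Element>();
  for (let y = box.top; y <= box.bottom; y += STEP) {
    for (let x = box.left; x <= box.right; x += STEP) {
      if (!pointInPolygon({ x, y }, lasso)) continue;       // step 2a: ray-casting test
      let el: Element | null = document.elementFromPoint(x, y);
      // Steps 2a.i-ii: select the element, then walk up while each parent still
      // has more than 60% of its area inside the lasso.
      while (el && elementOverlap(el, lasso) >= MIN_OVERLAP) {
        selected.add(el);
        el = el.parentElement;
      }
    }
  }
  return Array.from(selected);                              // step 3: duplicates removed
}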

Figure 9. An example usage of lasso selection on the web browser.

SelectPDF
For the second half of my project, I focused on implementing SelectPDF, a program that allows
the four intuitive, free-form selection mechanisms on PDF documents. The program consists
of two parts: a dynamic-link library written in C++ and a Universal Windows Platform app written
in C#. The library is responsible for understanding the selections and supplying direct access to
the selected contents, including images, text, and tables, without the need for a bitmap screengrab of the document page. Other applications, such as NuSys, can utilize this library to embed
the selection functionalities into their own environments. The user-facing UWP portion that I
developed for SelectPDF is an example of integrating the DLL into a separate application. The
platform provides a simple user interface for displaying a PDF document and coordinating the
four selection gestures. Once a selection is generated and highlighted, the contents of that
selection are immediately copied to the clipboard. At this point, the user can move to another
platform, such as Microsoft Word, and paste the contents of the selection. The clipboard serves
as an interface between SelectPDF and any other domain when extracting and sharing
information from a PDF document. Figure 10 below shows an example of the ability to copy and
paste both text and image elements from PDF documents using SelectPDF.

Figure 10. A comparison between the native copy and paste functionality and SelectPDF’s copy to clipboard
functionality on a region of a PDF document shown on the left. The top right image shows the pasted result of the
former and the bottom right image shows the result of the latter.

The logic behind each of the selections is the same as in cTed, but the overall structure of the
program varies. In this section, I give an overview of the SelectPDF program and highlight the
differences between the implementations of cTed and SelectPDF.
My library utilizes the open-source library MuPDF to interact with the underlying architecture of
PDF documents. The MuPDF library exposes both text and image elements from a document’s
page as blocks: fz_stext_block and fz_image_block. For text blocks, further structure exists
with fz_stext_line representing lines and fz_stext_span representing characters. The span
object holds information about the column number it is located in, along with other useful data,
allowing maintenance of the formatting of a table when such elements are selected. In addition,
each of the image and text objects contains information about its bounding box, playing a
similar role as the coordinates gathered from Range.getClientRects in cTed. They are central
to interpreting a selection in the context of a document.
On the other hand, unlike the DOM, the building blocks of a PDF document are not aware of the
relationships between one another. As a result, when I traverse through a given page, I
simply iterate through a list of image or text block elements that is not necessarily in the order of
appearance, although the exact location of each object on the page is known. The situation
reveals two main problems: 1) the underlying
data of selection contents do not exist under a
unified structure, as HTML elements do, and
2) they are not presented in order. To tackle
the first inconsistency, I created a selection
content struct that handles two types: image
and text. My DLL outputs a list of these structs
for a single instance of a selection mechanism.
Each element in the list can represent a text
block or an image that was included in the
selection region, but the arrangement of these selection contents on the original document page
may not be reflected.

Figure 11. A high-level class diagram of SelectPDF’s dynamic-link library.
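
As a rough illustration of the shape of this data, the struct can be thought of as a tagged union of the two content types; the sketch below is TypeScript-flavored and its field names are illustrative, since the actual struct lives in the C++ DLL.

// One entry per extracted fragment, tagged by its type.
type SelectionContent =
  | { kind: "text"; text: string }                                       // characters gathered from spans
  | { kind: "image"; width: number; height: number; data: Uint8Array };  // raw image bytes

// A single selection yields a list of these, in traversal order, which may
// differ from the reading order on the page as noted above.
type SelectionResult = SelectionContent[];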
As Figure 11 suggests, the library’s MuPdfApi class offers the entry points for my UWP program
to call, while the PdfDocument class encloses the main functionalities for the document. The
frontend program interacts with the DLL by simply sending over a list of input points in a call to
AddSelection, which generates the appropriate selection in the backend. The frontend is not
aware of the type of selection that is performed. Instead, it can then ask the library for the list of
rectangles to highlight by calling GetHighlights and retrieve the underlying contents of a
selection by calling GetSelectionContents with the index of that selection. Despite this extra
step of communication, the low-level implementation of the DLL allowed SelectPDF’s selection
gestures to be performant.
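
The call sequence from the frontend’s point of view is sketched below; the entry-point names come from the description above, but the exact signatures, the page-index parameter, and the UI helpers are assumptions rather than the actual interface.

interface Highlight { left: number; top: number; right: number; bottom: number }

interface MuPdfApi {
  AddSelection(pageIndex: number, points: { x: number; y: number }[]): number;  // returns a selection index
  GetHighlights(selectionIndex: number): Highlight[];
  GetSelectionContents(selectionIndex: number): unknown[];                      // list of text/image contents
}

declare function drawHighlights(rects: Highlight[]): void;     // hypothetical UI helper
declare function copyToClipboard(contents: unknown[]): void;   // hypothetical clipboard helper

function onStrokeFinished(api: MuPdfApi, pageIndex: number, stroke: { x: number; y: number }[]): void {
  const index = api.AddSelection(pageIndex, stroke);   // the backend decides which gesture it was
  drawHighlights(api.GetHighlights(index));            // rectangles to render as highlights
  copyToClipboard(api.GetSelectionContents(index));    // contents go straight to the clipboard
}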

Line Selection
When generating a basic line selection, I traverse through all of the text spans and image blocks
within the current page and compare the bounding boxes with the given line area. If the span or
block intersects with the target region, the element is selected and extracted as a selection
content. For character objects, a temporary array of characters is used to keep track of the
entire text selection. The text is added to the list of selection contents when I encounter a
different type of element, i.e. an image, or when I finish processing the page.
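
A language-neutral sketch of this traversal is shown below in TypeScript; the real implementation is C++ over MuPDF’s structures, and the simplified Block and Span shapes here are assumptions for illustration only.

type BBox = { x0: number; y0: number; x1: number; y1: number };
type Span = { bbox: BBox; text: string };
type Block =
  | { kind: "text"; lines: { spans: Span[] }[] }
  | { kind: "image"; bbox: BBox; data: Uint8Array };
type Content = { kind: "text"; text: string } | { kind: "image"; data: Uint8Array };

const intersects = (a: BBox, b: BBox) =>
  a.x0 <= b.x1 && a.x1 >= b.x0 && a.y0 <= b.y1 && a.y1 >= b.y0;

function lineSelection(page: Block[], region: BBox): Content[] {
  const out: Content[] = [];
  let buffer = "";                                          // temporary character buffer
  const flush = () => { if (buffer) { out.push({ kind: "text", text: buffer }); buffer = ""; } };
  for (const block of page) {
    if (block.kind === "image") {
      if (intersects(block.bbox, region)) {                 // image touching the line region
        flush();                                            // emit any text gathered so far
        out.push({ kind: "image", data: block.data });
      }
    } else {
      for (const line of block.lines)
        for (const span of line.spans)
          if (intersects(span.bbox, region)) buffer += span.text;   // collect overlapping spans
    }
  }
  flush();                                                  // emit remaining text at the end of the page
  return out;
}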
Bracket Selection
The concept of text lines in MuPDF makes bracket selection much simpler than in cTed. As I
traverse through all of the objects on the given page, I check for the bounding box of the very
first span of each line. If this particular rectangle intersects with the input vertical line area, the
entirety of the line is selected and extracted. Likewise, if an image block’s bounding box
intersects with the bracket stroke, the image is added to the selection contents. This process
ensures that multiple paragraphs or images can be selected along a single vertical line.
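
The corresponding check is sketched below using the same simplified shapes as in the line-selection sketch; as before, this is an illustration rather than the C++ code, and the image case follows the same bounding-box intersection test.

type BBox = { x0: number; y0: number; x1: number; y1: number };
type Line = { spans: { bbox: BBox; text: string }[] };

const intersects = (a: BBox, b: BBox) =>
  a.x0 <= b.x1 && a.x1 >= b.x0 && a.y0 <= b.y1 && a.y1 >= b.y0;

// bracket is the thin vertical band traced by the user's stroke.
function bracketSelectLines(lines: Line[], bracket: BBox): string[] {
  const selected: string[] = [];
  for (const line of lines) {
    const first = line.spans[0];
    // If the line's first span touches the bracket band, take the whole line.
    if (first && intersects(first.bbox, bracket)) {
      selected.push(line.spans.map(s => s.text).join(""));
    }
  }
  return selected;
}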

Figure 12. An example of selecting a region of PDF documents on a web browser and on SelectPDF. Given the
same starting and end points, as marked by orange in the screenshots, the native selection gives unintuitive results
while my program logically triggers a marquee selection.

Marquee Selection
For a marquee selection, I use logic similar to that of line selection. For every given block or span
on a page, I test whether the bounding box intersects or lies inside the rectangular region of
marquee gesture. In contrast to cTed, the bounding boxes for each of the traversed elements
are at the character level, with the exception of images, meaning they occupy a very small
space in the context of the entire text block. Therefore, it is safe to select the whole element
even if it is not completely bound by the selection boundaries, as further comparison of parent
or child elements is neither necessary nor possible.
Lasso Selection
To execute a lasso selection, a lot of work is done outside of the actual LassoSelection class,
mirroring the process described in cTed’s Lasso Selection section. However, because PDF
elements lack ancestral relationships, I am able to directly perform the point-in-polygon test on
the bounding box coordinates of every element in the traversal. In addition, as discussed above
in the Marquee Selection section, the small size of the character objects means that I do not
need to calculate the overlapping area between the bounding boxes and the polygon. If at least
one of the rectangle’s corner points lies inside the input lasso polygon, I conclude that the
rectangle overlaps with the lasso shape and add the element to the selection. The ray-casting
algorithm for the point-in-polygon test, described above, is implemented in the MuPolygon util
class.

Discussion and Future Work
I have presented my contributions toward exploring natural selection gestures for users interacting
with information in web browsers and PDF documents. I came up with significant improvements to
cTed and applied the algorithms to the structure of PDF documents through SelectPDF. During
this process, I placed an emphasis on the new lasso selection tool, particularly useful for pen
and touch devices. Both applications logically detect the target region to be selected within an
arbitrary layout of information and successfully extract the underlying resources to allow further
manipulation of the material by the user. SelectPDF demonstrates a solution to the obstacles
faced by information workers trying to copy and paste excerpts from PDF documents to a
separate ecosystem, such as a Word document, by facilitating an easier workflow for selecting
and extracting document contents. By recognizing several different selection gestures and
automatically copying the underlying contents of a selection into the clipboard, SelectPDF
supports both intuitive selection mechanisms and effortless interactions with other applications.
As a next step, both cTed and SelectPDF will be integrated with NuSys. Selected contents
from websites can be imported into NuSys as HTML nodes, while PDF nodes in NuSys can
natively provide functionality for the four selection gestures using the dynamic-link library. In
addition, further work can be done with SelectPDF, as the current implementation does not
support preservation of fonts, formatting styles, and links. The building blocks for these features
are offered by MuPDF as fz_font, fz_style, and fz_link. Lastly, additional studies on user
interactions in the context of these applications, such as scribbling gestures to remove existing
selections, could bring insight into potential improvements in the future.

Acknowledgements
All screenshots for web browser selections were taken on Google Chrome and Microsoft Edge.
Web content originates from www.wikipedia.org and www.nytimes.com. Screenshots for
SelectPDF examples use Emanuel Zgraggen’s paper on Tableur. I would like to thank Philipp
Eichmann for his incredible guidance and valuable feedback throughout my thesis project.

References
[1] Philipp Eichmann, Hyun Chang Song, and Emanuel Zgraggen. 2016. cTed: Advancing Selection Mechanisms in Web Browsers.
[2] 2017. MuPDF. https://mupdf.com. (2017).
[3] 2017. Diigo Web Collector. https://www.diigo.com/tools/chrome_extension. (2017). Accessed: 2017-04-09.
[4] 2017. Hypothesis.io. https://hypothes.is/. (2017). Accessed: 2017-04-09.
[5] 2017. Evernote Web Clipper. https://evernote.com/webclipper/. (2017). Accessed: 2017-04-09.
[6] 2017. SurfMark. http://www.surfmark.com/home/indexedu. (2017). Accessed: 2017-04-09.
[7] 2017. Pinterest. https://about.pinterest.com/en/browser-button. (2017). Accessed: 2017-04-09.
[8] 2017. Microsoft Edge. http://windows.microsoft.com/en-us/windows-10/getstarted-write-on-the-web/. (2017). Accessed: 2017-04-09.
[9] 2017. Adobe Acrobat DC. https://acrobat.adobe.com/us/en/. (2017). Accessed: 2017-04-14.
[10] 2017. Drawboard PDF. https://www.drawboard.com/pdf/. (2017). Accessed: 2017-04-14.
[11] 2017. Apple Inc. https://support.apple.com/kb/PH20218?locale=en_US. Accessed: 2017-04-14.





