The UHCL Course Support and Development Team urges
all users of SafeAssign to read the article below, published by
Blackboard. In short, SafeAssign is inherently limited to publicly
available data and to papers previously submitted through SafeAssign for
its comparisons — it cannot check against sources that are not publicly
available. For example, if a printed book, journal, or original paper was
never submitted via SafeAssign, or if sources sit behind “proprietary
authentication mechanisms” (typically subscription- or membership-based
sources requiring a user name and password), it is impossible for SafeAssign
to check against those sources, simply because they do not exist in its
databases.
Read the article posted below for a more in-depth
explanation of SafeAssign’s functionalities.
SafeAssign Returns 0% Match on Documents Which May Contain
Matches
Date Published: Aug 22, 2013
Product: Blackboard Learn; Major Release: 9.1; 9.0; 8.0; Version: 9.1, 9.0, 8.0
INTRODUCTION:
After a document is submitted through SafeAssign or Direct
Submit, SafeAssign may return a 0% match even though searching for
specific strings from the document in a search engine immediately returns
links to similar content.
This situation may initially be confusing for Instructors
and System Administrators, since there is a general assumption that the major
search engines are checked whenever a paper is submitted.
FUNCTIONALITY:
The following is an explanation of how SafeAssign works:
SafeAssign is designed to be a decision-support
mechanism. In other words, it is designed to give an Instructor
more information, from a wider set of sources, in a shorter period of time
than they could gather on their own. Instructors will have
the best experience with SafeAssign if their expectations are in line with that
goal.
SafeAssign is not infallible. The system can return
both false positives (items that are not plagiarized but that
SafeAssign flags as a match) and false negatives (items that actually match
other text but that SafeAssign does not identify as a match).
False positives can occur because SafeAssign
makes no distinction between cited and uncited text. For instance,
appropriately cited material that would not be considered an instance of
plagiarism can still be flagged as a match.
False negatives can come about for a variety of reasons:
- There are sources SafeAssign simply cannot check against, such as printed books or journals, original papers that were not previously submitted through SafeAssign, and sources behind proprietary authentication mechanisms or otherwise not publicly available.
- To detect matches, SafeAssign uses a proprietary algorithm that weights the potential matches. This helps assure the quality of results and avoids false positives for common phrases, for example. However, it can also mean that matches an Instructor might consider meaningful fall below SafeAssign’s confidence threshold and are not reported by the application, which the Instructor may perceive as a false negative (see the sketch after this list).
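Blackboard does not publish SafeAssign’s algorithm, so the sketch below is purely illustrative: the n-gram size, the COMMON_PHRASES set, the down-weighting factor, and the CONFIDENCE_THRESHOLD value are all invented. It only shows, under those assumptions, how down-weighting common phrases reduces false positives while a confidence threshold can silently drop overlaps an Instructor might have wanted to see.

```python
# Illustrative sketch only: SafeAssign's actual algorithm is proprietary.
# All weights, phrases, and thresholds below are invented for demonstration.
from collections import Counter

COMMON_PHRASES = {"on the other hand", "in this paper we"}  # hypothetical examples
CONFIDENCE_THRESHOLD = 0.40                                 # hypothetical value


def ngrams(text: str, n: int = 4) -> Counter:
    """Break text into overlapping word n-grams."""
    words = text.lower().split()
    return Counter(" ".join(words[i:i + n]) for i in range(len(words) - n + 1))


def match_score(submission: str, source: str) -> float:
    """Weighted fraction of the submission's n-grams that also appear in the
    source; n-grams that are common boilerplate phrases count for less."""
    sub, src = ngrams(submission), ngrams(source)
    if not sub:
        return 0.0
    weighted_hits = sum(
        (0.1 if gram in COMMON_PHRASES else 1.0) * count  # common phrases down-weighted
        for gram, count in sub.items()
        if gram in src
    )
    return weighted_hits / sum(sub.values())


def report(submission: str, sources: dict[str, str]) -> dict[str, float]:
    """Only sources scoring at or above the threshold are reported; weaker
    overlaps are silently dropped (a perceived 'false negative')."""
    scores = {name: match_score(submission, text) for name, text in sources.items()}
    return {name: s for name, s in scores.items() if s >= CONFIDENCE_THRESHOLD}
```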
SafeAssign is not designed to find a specific match, but
rather the best match from any source. Instructors often expect SafeAssign
to match against a particular source, whether on the web, in a previous
submission, or sometimes in a printed book. Sometimes SafeAssign will help
confirm such suspicions, but it may return a
different match, or no match at all, because it is checking against sources
other than the Instructor’s expected source, or because it is executing its
queries differently than the Instructor intends.
Ultimately, the decision as to whether plagiarism has
occurred must be made by the Instructor. As long as Instructors approach
SafeAssign with the appropriate expectations, treating it as a tool to assist
in decision-making rather than a definitive indicator of originality or
plagiarism, SafeAssign can provide valuable assistance to faculty and students
by indicating possible cases of plagiarism for further investigation.
SafeAssign does not run a Google search. SafeAssign is
designed to search the following databases:
- Institution database
- Global database
- ProQuest database
- An API call to Bing, meaning that finding a source by searching bing.com directly likewise does not guarantee a match in SafeAssign (a rough sketch of this multi-source lookup follows the list).
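As a purely structural sketch (the collection names echo the list above, but the data, the scoring function, and the web lookup are invented stand-ins, not Blackboard’s or Bing’s actual APIs), the point is that SafeAssign consults a fixed set of collections plus a search API call and keeps the best match it finds there, not whatever a manual Google search would surface:

```python
# Illustrative sketch only: collection contents, scoring, and web lookup are
# invented stand-ins supplied by the caller, not SafeAssign's real interfaces.
from typing import Callable


def check_submission(
    text: str,
    collections: dict[str, list[str]],       # institutional / Global / ProQuest stand-ins
    web_search: Callable[[str], list[str]],  # stand-in for the Bing API call
    score: Callable[[str, str], float],      # any similarity measure
) -> tuple[str, float]:
    """Return the single best-scoring source across all configured collections."""
    best = ("no match", 0.0)
    for name, documents in collections.items():
        for doc in documents:
            s = score(text, doc)
            if s > best[1]:
                best = (name, s)
    # The web is reached only through the service's own queries to a search API,
    # so its candidates can differ from a manual search for the same strings.
    for doc in web_search(text):
        s = score(text, doc)
        if s > best[1]:
            best = ("web (Bing API)", s)
    return best
```

Because the web is reached only through the service’s own queries to a search API, a page an Instructor can locate by hand is not guaranteed to be among the candidates scored.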
SafeAssign is designed to be a tool to assist in plagiarism
detection. It does not guarantee 100% accuracy.
Finally, the SafeAssign match percentage for a document is not
the amount of the document that is plagiarized, but the probability, based on
the search, that the document contains (or is) plagiarism.
Note: Each check is run against three
different databases that are in a constant state of change, as well as against
Bing.com through an API call. This means that the same document, checked at
different times, can produce different results, because each run is executed
against different underlying data.
Reprocessing the same report multiple times, even with no other
submissions in between, can therefore yield a different match value each time;
this is functioning as designed, as the sketch below illustrates.
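As a rough illustration of why re-running the same check can change the score (the corpus and the scoring below are invented, not SafeAssign’s actual data or algorithm), this toy example checks one submission against a corpus that grows between runs:

```python
# Illustrative only: invented corpus and scoring. The same submission scores
# differently once the underlying database has changed between runs.

def match_fraction(submission: str, corpus: list[str]) -> float:
    """Fraction of the submission's words that appear anywhere in the corpus."""
    words = submission.lower().split()
    corpus_words = {w for doc in corpus for w in doc.lower().split()}
    return sum(w in corpus_words for w in words) / len(words)


submission = "convection transports heat through bulk fluid motion"

corpus_monday = ["radiation transfers heat through electromagnetic waves"]
corpus_friday = corpus_monday + [
    "convection transports heat through the bulk motion of a fluid"
]

print(match_fraction(submission, corpus_monday))  # lower score on the older corpus
print(match_fraction(submission, corpus_friday))  # higher score after the corpus grew
```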