Using the Safe Stream API you are in full control of how you want to display the information to your consumers. Below is an evolving list of considerations.
We use an aggregated 10-point system to informally describe the maturity of content and any potentially upsetting scenes.
This acts as a benchmark to help consumers of content.
The aggregated score is an average across all of our sources. Since every source is human, each carries subtle biases and may be more sensitive to a particular category.
These categories are listed below.
In your solution, feel free to transform the 10-point system into a 5-star/point approach, or even use a percentage approach to fit your own design.
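As a rough sketch of that transformation, the helpers below map a 10-point rating onto a 5-star scale and a percentage. The helper names (`to_five_star`, `to_percentage`) are our own for illustration and are not part of the API; only the `content_rating` field names come from the example responses.

```python
def to_five_star(rating: int) -> float:
    """Map a 0-10 rating onto a 0-5 star scale; integer ratings land on half-star steps."""
    return rating / 2

def to_percentage(rating: int) -> int:
    """Map a 0-10 rating onto a 0-100 percentage."""
    return rating * 10

# Field names mirror the content_rating object in the example responses.
content_rating = {"nudity": 6, "violence": 10, "language": 10}

stars = {k: to_five_star(v) for k, v in content_rating.items()}
percentages = {k: to_percentage(v) for k, v in content_rating.items()}
```

Whichever scale you choose, keep the mapping consistent across categories so consumers can compare them at a glance.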
The aim is to make sure that the rating information is the first viewable content a consumer can see.
We recommend using the Instant Recommendation feature to get a recommendation on whether a consumer should watch a piece of content.
Providing this insight without requiring consumers to read the detailed comments is preferred. You will not need to directly display any explicit words or descriptions inside your service.
{
    "data": {
        "content_name": "Rambo",
        "consumer_reference": "test",
        "content_recommendation_breakdown": {
            "nudity": false,
            "violence": false,
            "language": null
        },
        "view_recommendation": false,
        "_links": {
            "related_content": {
                "href": "https://app.safestream.info/api/v1/content/12111"
            }
        }
    }
}
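A minimal sketch of consuming the response above: surface the overall `view_recommendation` first, and treat a `null` category in the breakdown as "no verdict yet" rather than a pass. The parsing logic is our own; only the field names come from the example payload.

```python
import json

# The Instant Recommendation response from the example above.
response_body = """
{"data": {"content_name": "Rambo", "consumer_reference": "test",
  "content_recommendation_breakdown": {"nudity": false, "violence": false, "language": null},
  "view_recommendation": false,
  "_links": {"related_content": {"href": "https://app.safestream.info/api/v1/content/12111"}}}}
"""

data = json.loads(response_body)["data"]

# The overall recommendation is the first thing a consumer should see.
recommended = data["view_recommendation"]

# Categories with a null value carry no verdict; flag them separately
# instead of treating them as safe.
unknown = [k for k, v in data["content_recommendation_breakdown"].items() if v is None]
```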
Secondly, as you will anonymously keep track of your consumers, you can gain better insight into your customers' needs.
The focus of this service is to provide consumers the choice to be safe.
Only display the comments for a piece of content once the consumer has confirmed they are ready to read them, for a given category.
{warning} We are unaware of the personal history of our customers, so we need to be careful about which comments and content we show them -- provide a choice before reading.
Even with context-free spoilers, there could be words and references to content that could be dangerous for particular individuals to read. Be mindful.
Below is an example response for getting the guidance comments for a piece of content, in this case the movie Bones (2001).
We've truncated the comments. Show the ratings first, then give consumers the option to choose whether to read further. Take the approach that the comments will potentially hold dangerous, upsetting information.
We'll provide a React demo project at a later stage to demonstrate the concept.
{
    "data": {
        "id": 2422,
        "name": "Bones",
        "year": 2001,
        "content_comments": [
            {
                "type": "language",
                "content": "55 F-words and ..."
            },
            {
                "type": "nudity",
                "content": "We see a young woman ..."
            },
            {
                "type": "violence",
                "content": "A woman invites a ..."
            }
        ],
        "content_rating": {
            "nudity": 6,
            "violence": 10,
            "language": 10
        }
    }
}
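One way to honour the "choice before reading" rule is to gate each category's comments behind an explicit opt-in, while always showing the ratings. The `visible_comments` helper and the opt-in set are our own stand-ins for whatever consent mechanism your UI uses; field names match the example response.

```python
# Response data shaped like the Bones (2001) example above (comments truncated).
content = {
    "content_rating": {"nudity": 6, "violence": 10, "language": 10},
    "content_comments": [
        {"type": "language", "content": "55 F-words and ..."},
        {"type": "nudity", "content": "We see a young woman ..."},
        {"type": "violence", "content": "A woman invites a ..."},
    ],
}

def visible_comments(content: dict, opted_in_categories: set) -> list:
    """Return only the comments for categories the consumer agreed to read."""
    return [c for c in content["content_comments"] if c["type"] in opted_in_categories]

# Ratings are always safe to show; comments need explicit consent per category.
ratings = content["content_rating"]
shown = visible_comments(content, {"language"})
```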
Currently we have content_comments and content_rating to display information to a consumer.
The content_rating does not convey context; it only shows the severity of an issue.
We will be adding a new type, content_brief, to describe the comments without exposing their content. Going forward we would look to include other trigger warnings inside a content array.
This is the proposed structure for content_brief.
"content_brief": {
    "nudity": [ "Nudity", "Sexual Violence" ],
    "violence": [ "Graphic Violence" ]
}
This would allow you to expose the details to a consumer without them having to read through the detailed comments.
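As a sketch of how a client might use the proposed structure, the helper below flattens content_brief into a de-duplicated list of warning labels a consumer can scan before opting in to the full comments. The `warning_labels` helper is our own; the structure matches the proposal above.

```python
# The proposed content_brief structure from the example above.
content_brief = {
    "nudity": ["Nudity", "Sexual Violence"],
    "violence": ["Graphic Violence"],
}

def warning_labels(brief: dict) -> list:
    """Flatten content_brief into an ordered, de-duplicated list of labels."""
    seen, labels = set(), []
    for tags in brief.values():
        for tag in tags:
            if tag not in seen:
                seen.add(tag)
                labels.append(tag)
    return labels
```

Displayed as short chips or badges, these labels give consumers the gist of a category without exposing any of the detailed comment text.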