31 Aug, 2020 08:27 a.m.

When developing AI systems, we need to let ourselves be guided by our ethics and morals wherever regulation is not in place, as is often the case in today's environment. User privacy and safety are the main concerns, and we have a responsibility to make sure that AI systems are secure before bringing them to the marketplace and the general public. We have always been strong advocates for user privacy and safety. Over the last few years, many of the world's most prominent tech companies have drawn negative attention in the media for violating user privacy and compromising user safety.

AI is growing rapidly and becoming ever more present in our everyday lives. Much of the progress we are making in AI is groundbreaking, and because it is so new, there is little or no legislation in place. In the absence of rules and regulations, there is always a risk that an AI developer may set their ethics aside in favor of something less honorable and more profitable. Don't get us wrong: having AI principles and laying out ethical guidelines is very important, but those guidelines don't mean much if they aren't actually implemented to safeguard the basic principles behind them.

AI developers who want to produce ethical products need to instill a strong moral code in their workforce and company vision. Where an area of AI lacks legislation, it is up to the developer to make sure that their products, and the development thereof, are indeed ethical and fit within the moral code of their company. One of the companies we admire that has published a set of AI principles is Microsoft. You can read more about it here.
