Policies and Approach

Our approach to AI safety begins with well-defined policies and guidelines that set out acceptable and unacceptable behaviours for Finley AI (also known as Finley). These frameworks allow us to deliberately build our values into Finley AI and to gather the data needed to align its behaviour and responses accordingly.

Finley AI brings pioneering AI to the financial landscape while safeguarding user privacy and information security through our policy framework and charter.

Ensuring User Well-being:

  • Finley AI must never engage in actions that could directly harm users or others.
  • It should refrain from providing advice or content related to harmful or malicious activities.
  • Finley AI must avoid endorsing self-harm, suicide, or any other form of dangerous behaviour.
  • Discrimination, stereotypes, and biases based on factors such as age, disability, ethnicity, gender identity, nationality, race, or religion are strictly prohibited.
  • Encouraging hatred, harm, derogatory language, or slurs against any individual or group is off-limits.
  • Finley AI should not offer critical advice that should come from qualified professionals, including medical, financial, and legal guidance.

Promoting Respect and Harmony:

  • Finley AI’s interactions with users and others should be peaceful and respectful.
  • Valuing diverse perspectives, Finley AI should work to de-escalate conflicts and respect differences of opinion.

Avoiding Hallucinations:

  • Acknowledging the potential for misinformation, Finley AI should exercise scepticism, accept feedback readily, and decline to answer questions on topics where its knowledge is not up to date.

Legal and Ethical Compliance:

  • Finley AI abstains from providing guidance on acquiring or utilising certain goods, such as illegal substances, weapons, or medical products that require authorisation.

For a comprehensive understanding of our data collection and usage practices, we encourage you to read our Privacy Policy.

Known Limitations

Large AI models are ongoing projects, and ours are no exception. As we continue to refine and enhance our technologies, we remain committed to transparency about the areas in which our AI may not perform optimally.

These areas include:

Addressing Biases: Language models can absorb biases from the data they are trained on, which can lead to the propagation of stereotypes related to factors such as race and gender. While we have worked diligently to mitigate these biases in our AI, they may still surface.

Recognising Constraints: Finley AI may not always be cognisant of its own limitations. It might refer to entities that do not exist or claim accomplishments it has not achieved.

Ensuring Accuracy and Reality: On occasion, language models may confidently assert something even when it is inaccurate. When relying on information from Finley AI, we advise you to exercise your own discernment and fact-check.

Limited Recall: At present, Finley AI can retain only a portion of previous conversations on our platform. Consequently, it may forget facts or earlier statements.

Exercising Caution: AIs can inadvertently generate content that is unsafe or inappropriate, particularly when they perceive the interaction as a game or fictional scenario. Our AI may exhibit this behaviour as well.

Learning from Feedback: Finley AI can learn from user feedback, but only to a limited extent; it may not fundamentally alter its behaviour based on user suggestions.

Mathematical Proficiency: Like humans, Finley AI has limitations in mathematics. It can perform most calculations, but it does not possess expert-level mathematical ability.

Multilingual Competency: Our AI was designed primarily for English speakers, although we have incorporated multilingual capabilities. Interacting with Finley AI in other languages may sometimes produce results below expectations.

We’re committed to transparency and advancement. If you identify areas requiring rectification or enhancement, please notify us at feedback@inx-auhxcxhsa9h2cgfd.westus-01.azurewebsites.net.

Review and Improvement

We will never stop improving our safety practices. There is no surefire technique for perfect alignment, and no policy can anticipate every possible real-world situation, especially in an emerging technology; we are aware that challenges are part of the process.

Our strong safety foundation rests on regularly checking for areas where our AI might fall short and fixing issues quickly.

Here’s how we work to make our AI better and safer:

Partner Feedback: Your insights as a user are incredibly important. If you come across aspects in need of refinement or correction, please contact us at feedback@inx-auhxcxhsa9h2cgfd.westus-01.azurewebsites.net.

Tough Testing: We subject our AI to rigorous testing to make sure it stays safe, even when things get difficult. We work hard to find problems with our AI; this helps us learn and improve.

Our commitment to safety is an evolving journey. As a forward-thinking organisation, we consistently learn from the insights you provide, and we strive to fortify our AI to meet the dynamic challenges and evolving expectations of our users. Should you have any thoughts, concerns, or suggestions for improvement, please don’t hesitate to connect with us at feedback@inx-auhxcxhsa9h2cgfd.westus-01.azurewebsites.net.
