Responsible AI and IQNECT
IQNECT is developed with a strong commitment to responsible AI practices.
IQNECT leverages OpenAI’s API, which is built and maintained in accordance with industry-leading Responsible AI principles.
IQNECT platform safeguards
IQNOX has implemented several product-level measures to prevent and manage inappropriate AI output:
- Human-in-the-loop validation: AI-generated content is presented for user review prior to being applied to systems or shared externally.
- Role-based access controls (RBAC): Sensitive AI functions (such as system write-backs or integration with production platforms) are restricted to authorized users.
- Context scoping: Prompts and output formatting are tailored to stay within defined domains (e.g., technical documentation, requirements analysis) to minimize the risk of inappropriate or out-of-scope recommendations.
- Audit logging and traceability: All AI-generated content is logged and traceable to ensure accountability and transparency in usage.
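The interplay of these safeguards can be sketched as a single write-back gate. The sketch below is purely illustrative: the role names, function names, and log fields are assumptions for the example, not IQNECT's actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative RBAC policy: which roles may apply AI output to systems.
# The role names here are hypothetical, not IQNECT's actual roles.
WRITE_BACK_ROLES = {"admin", "integration_manager"}

@dataclass
class AuditLog:
    """Minimal audit trail: every AI-generated artifact is logged."""
    entries: list = field(default_factory=list)

    def record(self, user: str, action: str, content: str) -> None:
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "action": action,
            "content": content,
        })

def apply_ai_output(user: str, role: str, content: str,
                    human_approved: bool, log: AuditLog) -> bool:
    """Gate an AI suggestion behind RBAC and human review before write-back."""
    if role not in WRITE_BACK_ROLES:        # role-based access control
        log.record(user, "denied:rbac", content)
        return False
    if not human_approved:                  # human-in-the-loop validation
        log.record(user, "pending:review", content)
        return False
    log.record(user, "applied", content)    # audit logging and traceability
    return True
```

Note that the gate logs every outcome, not just successful applications, so denied and pending requests remain traceable as well.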
Leveraging OpenAI’s safety features
IQNECT uses OpenAI’s Application Programming Interface (API) as its AI engine, processing customer data for activities such as requirements analysis. OpenAI applies numerous safety mitigations at the model and infrastructure level, including:
- Content moderation filters that detect and block harmful, unsafe, or policy-violating content.
- Alignment training that reduces the likelihood of biased, toxic, or misleading responses.
- A Preparedness Framework for tracking and managing severe risks from frontier model capabilities.
- Usage monitoring for potential misuse or abuse of generative capabilities.
- Compliance with standards such as SOC 2 and CSA STAR.
These guardrails are continuously updated as model capabilities evolve.
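A content-moderation guardrail of the kind described above typically works as a pre-send gate on both the prompt and the model's response. The sketch below illustrates the pattern only: the keyword check is a trivial stand-in for a real provider moderation endpoint (which returns per-category scores rather than matching keywords), and the blocked-term list is invented for the example.

```python
# Illustrative stand-in for a provider moderation endpoint.
# A real moderation API returns per-category risk scores; this keyword
# list exists only to make the gating pattern concrete and testable.
BLOCKED_TERMS = {"malware payload", "credential dump"}

def moderate(text: str) -> dict:
    """Flag text containing any blocked term (stand-in classifier)."""
    lowered = text.lower()
    flagged = [t for t in BLOCKED_TERMS if t in lowered]
    return {"flagged": bool(flagged), "categories": flagged}

def safe_generate(prompt: str, generate) -> str:
    """Moderate both the prompt and the model output before returning."""
    if moderate(prompt)["flagged"]:
        return "[request blocked by content policy]"
    output = generate(prompt)
    if moderate(output)["flagged"]:
        return "[response withheld by content policy]"
    return output
```

Gating both directions matters: a benign prompt can still elicit an unsafe completion, so the response is checked independently of the request.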
OpenAI does not train its models on IQNOX data processed through the OpenAI API.
Additionally, OpenAI automatically deletes data processed via the API within 30 days (barring a legal hold). OpenAI is contractually bound to protect the confidentiality of customer data provided via the API. OpenAI does this with industry-standard measures such as encrypting all data at rest (using AES-256) and in transit (using TLS 1.2+). It also offers a Bug Bounty Program for responsible disclosure of vulnerabilities discovered on its platform and products.
Finally, OpenAI has undergone a SOC 2 Type II examination of its security controls.
Customer-controlled guardrails
Where applicable, IQNOX allows customers to:
- Configure AI use cases appropriate to their risk profile.
- Disable or restrict access to certain models or provider features.
- Submit feedback on undesired responses for review and adjustment.
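The customer-facing controls above can be modeled as a small per-customer policy object. The field and method names below are assumptions chosen for the sketch, not IQNECT's configuration schema.

```python
from dataclasses import dataclass, field

@dataclass
class GuardrailPolicy:
    """Hypothetical per-customer guardrail configuration."""
    # Use cases the customer has enabled for their risk profile.
    enabled_use_cases: set = field(default_factory=lambda: {"requirements_analysis"})
    # Models or provider features the customer has disabled.
    disabled_models: set = field(default_factory=set)
    # Feedback on undesired responses, queued for review and adjustment.
    feedback: list = field(default_factory=list)

    def is_allowed(self, use_case: str, model: str) -> bool:
        return use_case in self.enabled_use_cases and model not in self.disabled_models

    def report(self, response_id: str, note: str) -> None:
        self.feedback.append({"response_id": response_id, "note": note})
```

A policy like this would be evaluated before every AI request, so disabling a model or use case takes effect immediately for that customer.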
IQNOX continually monitors and improves its controls in response to advancements in generative AI and evolving customer requirements.