There are a few aspects of privacy: i) queries sent to external AI tools for analyzing scanned documents, ii) queries sent to external AI tools for document generation, and iii) data used to train these AI tools. All our queries to external platforms use industry-standard encryption to protect data in transit. Our queries are anonymized so that no personally identifying information leaves the SuperVisas platform. Similarly, all the datasets we use to train and fine-tune AI tools are anonymized. However, extracting data from an applicant-uploaded scanned document requires that document to leave our servers. This feature is still under development, and we are evaluating locally hosted and private-cloud solutions. We only work with reputable companies that are contractually bound to comply with privacy laws. Regulations in this area are still evolving. Privacy is important to us, and we will continue to ensure we meet industry standards.
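As a rough illustration of what "anonymized before it leaves the platform" means in practice, a query can have identifying fields replaced with typed placeholders before transmission. This is a minimal sketch; the patterns and placeholder labels below are illustrative assumptions, not our actual redaction pipeline, which covers many more identifier types.

```python
import re

# Illustrative redaction patterns; a production pipeline would be far
# more thorough (names, addresses, dates of birth, national IDs, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "PASSPORT": re.compile(r"\b[A-Z]{1,2}\d{6,8}\b"),
}

def anonymize(text: str) -> str:
    """Replace matched PII with typed placeholders so the query can be
    sent to an external AI platform without identifying information."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

query = "Applicant jane.doe@example.com, passport AB1234567, phone +1 416 555 0199"
print(anonymize(query))
# → Applicant [EMAIL], passport [PASSPORT], phone [PHONE]
```

Typed placeholders (rather than simple deletion) preserve the sentence structure the external model needs while keeping the identifying values on our servers.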
Accuracy and Hallucinations
Preventing hallucinations and ensuring accuracy depend primarily on the context provided to the AI. The latest versions of GPT have greatly improved prompting mechanisms, which improves our ability to set the context for a query. Fine-tuning models can also help, but more progress is needed in this area. Our current document generation deployments still produce some mistakes, although they keep getting better! For this reason the applicant or reviewer, as the case may be, is still in the loop: they can review and edit all outputs. As we develop the Immigration ChatBot, hallucinations are more of a concern because we have less control over the input (i.e., the applicant's questions). We are monitoring these results closely.
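To make "setting the context for a query" concrete, here is a sketch of how a grounded generation request can be assembled: verified facts are inlined into the prompt, and the system instruction tells the model to use only those facts. The prompt wording, field values, and message structure are illustrative assumptions, not our production prompts.

```python
# Sketch: pin the model to verified context rather than letting it
# answer from general knowledge, which is where hallucinations arise.
SYSTEM = (
    "You draft immigration support documents. Use ONLY the facts in the "
    "provided context. If a required fact is missing, write [MISSING] "
    "instead of guessing."
)

def build_messages(context: str, instruction: str) -> list[dict]:
    """Assemble a chat-style request with the context inlined, so every
    claim in the output can be traced back to a supplied fact."""
    return [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": f"Context:\n{context}\n\nTask: {instruction}"},
    ]

messages = build_messages(
    context="Applicant: software engineer, 5 years experience, job offer in Toronto.",
    instruction="Draft one paragraph summarizing the applicant's eligibility.",
)
```

The explicit "[MISSING] instead of guessing" instruction is one reason chatbot inputs are harder: with free-form applicant questions we cannot guarantee the context contains the facts the answer needs.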
Application Assessment is where we have the greatest concern around bias and discrimination. It will remain an internal tool until we can verify that it does not discriminate; LLMs could well show bias against immigrants, specific languages, and specific countries. For document generation and scanning we have not seen any issues and are less concerned: the context and queries are very specific, which we believe limits bias or discrimination.
We will always have a human in the loop, or an escalation pathway to a human. Our immigration experts (lawyers in the USA and RCICs in Canada) review all documents and applications, and their feedback is used to catch mistakes and improve the product. We don't imagine ever removing these positions; we aim to make them more efficient and free of form-filling drudgery. We don't see AI reducing the amount of work for humans so much as changing the unit economics and broadening the addressable market. AI is a co-pilot that will improve and expand our industry.