Expert: Catherine Parry, DeepView, Facilitator: Sarah Collinson, AllClear Insurance Services
- The importance of data to financial services, and to the effective and trusted implementation of AI in the sector, was acknowledged at a meeting of UK authorities, financial institutions, technology companies and other stakeholders earlier this year
- The increasing reliance on data raises the importance of data quality and of the standards that should be adhered to. However, there is currently a lack of consensus on data standards in the financial services sector, including on what constitutes good practice, and there may be challenges in applying existing data standards to AI
- It’s clear that the status quo cannot last: current policies and regulatory frameworks do not meet the wide and ever-changing needs of AI.
- At the height of the pandemic, we saw over 50K new posts daily on LinkedIn; the collision of business and social practices arrived with a bang
- Suddenly, in the interests of team building and interaction while working remotely, teams were encouraged to post pictures of their new workstations, blissfully unaware of the data they were sharing in the background
- White-hat hacker Rachel Tobac has demonstrated how this kind of background information can be exploited
Social media has brought huge, immeasurable benefits to our society as a whole, and particularly throughout the pandemic. Consumer demand has pushed the boundaries of personal lines in the digital world.
But how do we evaluate and manage the new risks that arise from so many different types of content in the business environment?
LinkedIn, WhatsApp, and WeChat are excellent tools for engagement and brand building, and have vast security infrastructure to protect us from misuse and abusive content, among other things. MiFID rules and the GDPR are very clear for businesses, but what of the rules surrounding the use of personal devices?
How do we turn these challenges into opportunities? Are we clear about the rules and parameters?
For example, is a team WhatsApp chat group a potential data leak and could the content be used in an FCA investigation?
We need to trust employees to use whatever means they need to get their work done, but it is important to have a policy to monitor links relating to work. We have a cultural view that texts are different from WhatsApp or LinkedIn messages, and memes are another challenge entirely.
We are seeing cases where the lines between personal and business devices are blurred, particularly in the heavily regulated financial services environment.
Group chats among traders, unrecorded or unmonitored, can be seized by the FCA as evidence of insider trading. This blurs the line between personal life and regulation.
So, who controls the personal devices when these are also being used for work? Are mutual exchanges relating to business private? And how do we turn these challenges into opportunities?
The crossover between work and home life needs to be clear; complacency about responsibility won’t be accepted. As with data rules, compensation for breaches is a likely outcome in the future. In a shared data environment, it would be naïve to expect different rules.
Should we be seeking to have all work interactions on secure environments giving us a clear audit trail?
Taking ownership upfront and early, with education running through induction and ongoing training courses, shifts the responsibility to both individuals and businesses. Likewise, clear rules and actions relating to personal devices and the use of social media will mitigate risks, i.e., making clear what is and is not expected and what could happen if you get it wrong. Businesses must take responsibility for a duty of care to employees, educating them about potential disclosure requirements in the event of an investigation.
There needs to be a shift from the lockdown approach many major organisations take of banning all social media, which pushes their employees onto personal devices for client interaction at the client’s request. Perhaps adopting an approach with clear rules that safeguard both employers and employees is the future?
Many financial advisers are choosing to record client interactions via social media, e.g. WhatsApp messages or video chat, to ensure clarity and a proactive defence in the event of an investigation.
Conduct and culture, training and trust
Conduct failures come back to culture. Impactful education should outline that defined interactions must always be conducted in a certain way, i.e., according to the business rules.
These considerations are not new; the same questions about the responsibilities of the individual vs the company were raised with the advent of faxes, text messages and so on. Execution is the question. Employees are human and therefore make mistakes.
We need to move governance from box ticking to an interactive approach with impactful education providing protection from risk.
Do we have adequate social media policies? Do any of us have corporate policies relating to Slack or Zoom notifications? Without proper internal processes who holds the liability for breaches?
Social media in all its forms is progress and here to stay, but the ability to use it safely in the workplace can be achieved with clear education and knowledge. It is not about abdicating responsibility, but rather about enabling employees to use these tools in a compliant way.
Products such as DeepView have an AI algorithm which recognises different components of video or photos, e.g. an Instagram review photo tagged to an organisation, allowing businesses to track and assess risk. These options are increasingly important with the UK’s spike in video chat.
This is not so much ‘big brother’ as monitoring where breaches may take place, enabling businesses to act via internal process and protocol. We all have these protocols in place for potential viruses or phishing campaigns, so why not our social media?
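The monitoring approach described above can be sketched in miniature: an upstream image model labels what appears in a post, and an alerting layer flags posts whose labels intersect a risk watchlist for human review. This is a hypothetical illustration, not DeepView's actual API; the label names, watchlist and `Post` structure are all assumptions made for the example.

```python
# Minimal sketch of a watchlist-based social-media alert, assuming an
# upstream image classifier has already produced labels for each post.
# The labels, watchlist, and Post fields below are illustrative only.
from dataclasses import dataclass, field

# Assumed watchlist of image contents a compliance team might care about
RISKY_LABELS = {"whiteboard", "screen", "id_badge", "document"}

@dataclass
class Post:
    author: str
    platform: str
    labels: set = field(default_factory=set)  # labels detected in the image

def flag_posts(posts):
    """Return (post, matched_labels) pairs where detected labels
    intersect the risk watchlist. Matches trigger internal review,
    not automatic action."""
    alerts = []
    for post in posts:
        hits = post.labels & RISKY_LABELS
        if hits:
            alerts.append((post, hits))
    return alerts

if __name__ == "__main__":
    posts = [
        Post("alice", "LinkedIn", {"desk", "plant", "whiteboard"}),
        Post("bob", "Instagram", {"coffee", "dog"}),
    ]
    for post, hits in flag_posts(posts):
        print(f"ALERT: {post.author} on {post.platform} shared {sorted(hits)}")
```

In this sketch only the LinkedIn post is flagged (a whiteboard visible in a workstation photo), mirroring the "pictures of new workstations" risk described earlier; the alert then feeds an internal process and protocol rather than any automated sanction.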
What does good look like?
One thing is certain, the pace of change won’t slow down.
As Mark Zuckerberg raised in his email to staff following their service outage on 5th October, the pace of change is even outpacing him:
Hey everyone: it's been quite a week, and I wanted to share some thoughts with all of you.
First, the SEV that took down all our services yesterday was the worst outage we've had in years. We've spent the past 24 hours debriefing how we can strengthen our systems against this kind of failure. This was also a reminder of how much our work matters to people. The deeper concern with an outage like this isn't how many people switch to competitive services or how much money we lose, but what it means for the people who rely on our services to communicate with loved ones, run their businesses, or support their communities.
I know it's frustrating to see the good work we do get mischaracterized…. But I believe that over the long term if we keep trying to do what's right and delivering experiences that improve people's lives, it will be better for our community and our business.
When I reflect on our work, I think about the real impact we have on the world -- the people who can now stay in touch with their loved ones, create opportunities to support themselves, and find community. This is why billions of people love our products. I'm proud of everything we do to keep building the best social products in the world and grateful to all of you for the work you do here every day.
"For us, as long as data is collected in the right way, AI alert solutions running in parallel with existing infrastructure, able to recognise potentially damaging components of pictures and videos as well as content and to alert us, will become accepted hygiene factors to keep us safe through all our working interactions".