In attendance: Augustina, Chelsi, Jasmyn, George, Catherine Ford, Lillian, Nandini, Jean, Dielle, Nicole, Janet, Eva, Alexia
Action Items:
Operational
Review Niagara contracts and send back for execution - Dielle
Follow up with Lakeridge regarding contracts – Dielle
Follow up with ICES contracts (at a later date) – Dielle
Decide on Steering Committee Co-chairs at next meeting
Send draft of data planning as seen in Asana – Jasmyn
Network access question to ask at each site: draft the actual question around network access – Alexia; subsequently contact sites and inquire – PSSP
Advisory Committees
Ask about logo usage in Youth and Family Advisory Committees – Chelsi and Jasmyn
2-Pager
Reach out to North Bay about what this document’s purpose is – Sarah
Edit current document before submitting to REB – Sarah, George, Aristotle
Training Evaluation – Steve (Jasmyn will reach out to him) https://edc.camhx.ca/redcap/surveys/?s=KWEFDTTJRP
Add to survey introduction blurb: “… that you attended at CAMH, and to allow us to assess the progress and confidence of the group of individuals performing each NAVIGATE role. Your feedback is essential … share your thoughts. Your feedback on this survey will be anonymous.”
Make question 1 (which sessions they attended) not optional, and remove the “prefer not to answer” answer choice
Add options to the “Knowledge” section of the survey for SEE and director training
Add 4 question versions: “In the [IRT/Family/SEE/Director] training session that you attended, do you feel that the balance of didactic lectures versus role-play was adequate for you to build confidence in implementing the [IRT/Family/SEE/Director] material?”
Add to the training evaluation section of the survey: “Do you feel that there are any gaps in your knowledge of NAVIGATE or in your ability to implement the material you have learned?”
Qualitative Interviews – Juveria (Dielle will reach out to her)
Who is leading this?
What is the timeline?
Who will actually be doing the interviews?
Will we need a protocol amendment to account for the change in timeline?
Do we need to revise qualitative interview guides and resubmit to REB?
What does the data look like after NVivo (for database planning purposes)?
Minutes:
Contracts
The Niagara contract is almost complete. There are no more changes; they are OK with the proposed revisions. We need to clean it up and send it back for execution.
The only pending contract is with Lakeridge; the manager hasn’t heard from Legal, but Dielle will follow up. The hope is to have contracts wrapped up by the end of March.
Dielle hasn’t inquired about the ICES contracts; they are not as much of a priority right now, but she will follow up.
Miscellaneous
Who will co-chair the Steering Committee? Aristotle usually does it and George sometimes does, but we can decide at the next meeting when more people are present.
Budget item on the agenda: Dielle talked to Lillian and they have dealt with it.
EPI-SET Logo
1. Out of the preliminary data management meeting, it was decided that we want everything to look consistent.
2. There is a draft logo, made by Steve, in the survey that was circulated.
3. How widely do we want to use the logo?
Would it be included with the newsletter/summary progress report? Usually that would be on CAMH letterhead, which we would likely stick with for this purpose.
Regarding the other 2-pager sent out from training, it is less official, so would the logo go on that type of document? Would only official documents have CAMH branding?
EPI-SET logo will be on the 2-pager, REDCap, everything except reporting documents.
We can look at Youth Can documents for reference – there is very little CAMH specific content on their documents and website.
Will people recognize the EPI-SET logo?
Anything REB-approved can have both logos.
Anything that’s for the sites can have the EPI-SET logo only, because they already know what EPI-SET is. Should anything for patients/families also carry the CAMH logo?
Is there any backstory to the logo? The idea is consistency; Steve made the logo.
We don’t want to appear too top-down; the individual programs are delivering the model. There is some trust in and recognition of CAMH (from a conversation with Lillian), but this may not be universally perceived in other parts of the province. We need to be mindful of the perception of CAMH telling everybody what to do versus an investment by the individual programs. What would Lillian and Augustina think is important? This will be put on the agenda for the advisory committees.
If documents are given to sites to pass on to patients, patients may be confused by the CAMH branding. The documents may also need to go to the site REBs for approval, in which case we may need the institutions’ logos. We need to be clear about the purpose of each document.
Prominent CAMH branding would also make a document identifiable as mental health-related.
2-Pager
1. What was the purpose of these 2-pagers?
The ask was less about what EPI-SET is and more about the sites’ relationship with CAMH around what they are doing.
Do all pieces of information need to be in the handout? Is this still the ask?
Some of these general information tools are more of a moving target as clinicians get more familiar with what they are part of. Other pages seem to contain content similar to the consent forms.
Sarah will reach out to North Bay about what they want from this sheet (the request from the Zoom call).
Nandini thought it was to be used to recruit patients into the study, though this may have been a separate request.
This sheet may be more in line with what the coordinator will go over with the patient, rather than the site. We do not have an REB-approved handout for participants; in the past, we have made a 1-pager for clinician use that can be given to the participant. The piece that needs to be clarified is the intended audience for each document.
If it is for the client, “What is NAVIGATE?” needs to be clarified.
Nandini: the original aim may have been to share with clients who may be interested in participating.
We will adjust the document after Sarah connects with North Bay. We need to be mindful of language: is NAVIGATE research, or are people actually part of the intervention model even if they don’t want their data shared? We would need to work on the wording. People are not opting in or out of the intervention model; they are consenting to their data being used. Even clinicians have had questions about this.
Clinicians are not doing the consent process.
No in-person contact between research team and patients – all over Zoom or over the phone with research staff.
Training Evaluation
1. Not much is asked about the actual training itself (overall training feedback section).
2. Is there any interest in getting feedback on specific components of the training?
E.g., how did they find the director meeting at the end of the day? How did they find the individual component sessions? Was there a good balance of didactic versus role-play?
Add a section about the training types so that, based on the sessions they attended, we can ask them to rate how adequate the overall distribution of didactic versus role-play was in building their competence to implement their role.
Are we explicitly assessing perceived gaps? Would they know? Maybe another survey 3 months post-implementation? At the end of the year of consult calls, or at 6 months, we could check in again with the teams with these questions and add a question about gaps.
Somebody might not have gotten something from the training because they aren’t good at it or didn’t understand something, so it also needs to be considered as an assessment of where they think they are, especially if we think they should have gotten something. For the question “Do you intend to make changes in your practice as a result of this training?”, if they say no or unsure, that could be a real problem and we would need to follow up.
Should we ask about gaps in the current survey, or wait 3-6 months and ask as a follow-up? We can ask now; if they think there’s a gap in something we actually covered, we might be able to help them. We also need to know what they didn’t get so we can help them from the beginning (in the training application section), and they may be more thorough as a result. We can also use the responses to shape ongoing training calls with NAVIGATE trainers.
Also, regarding question 1: why is it optional, and why is there a “prefer not to answer” option?
It will be kept anonymous as well.
Whose action item would this be? Faisal wrote the questionnaire and Steve added the questions, so we will ask Steve to add these to the survey.
In the Knowledge section, SEE and director training aren’t included, so we will add those as well. Director training finishes on the March 8th site call.
Timeline
As part of the data collection working group, we have started to meet to think about what the structure will be from a REDCap perspective.
Part of what we suggested is that there is value in consistency in the way surveys go out; we want the same feel across the website. Naming of variables on the back end needs to be done in a consistent way.
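As a purely illustrative sketch (a hypothetical convention, not something decided at this meeting): back-end variable names could encode instrument, timepoint, and item (e.g. trainingeval_t1_q01_sessions, trainingeval_t1_q02_knowledge_irt), so that REDCap exports sort predictably and the same item carries the same name across sites and surveys.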
Outcome #4 (meaningful engagement): it wasn’t clear who is leading it or what its timeline is, so we can map out when it happens and whether a protocol amendment is needed. Juveria was supposed to take the lead on that. For the REB amendment, we sent the qualitative interview guides, but they probably aren’t the final versions. Does she have a timeline in mind? Based on that and other moving parts, we should figure out what makes sense and how that timeline fits with everything else going on. What does the qualitative data look like after NVivo? Who is doing the interviews?
ECHO evaluations
We are going to do the ECHO pre- and post-evaluations close to when we start the ECHOs; we usually do that two weeks to a month before the sessions start. In terms of timeline, that would be early summer, if we are still on track for a July start date.
Data planning: Jasmyn will send around the Asana screenshot.
Technological limitations
The issue with remote assessments: there are concerns about a participant being logged into a staff computer on an internal network (unsupervised).
There are also concerns about securing devices and about having multiple windows open.
Is there a guest network we could use instead of an internal network? That would get around the privacy concerns; we would then need to figure out where the device will be located and be able to secure it there.
For Jean’s study, they didn’t have anything like that. Everything was done remotely; the rater was there asking direct questions, and patients talked to the interviewer only remotely. The rater at Hillside could see them, and there was always a staff person keeping an eye on the door.
We could pull Jonathan in from ECHO to ask about the network issue.
Is there an option like a guest Wi-Fi login? If so, participants can log in separately. If not, we could get dedicated tablets or Lenovo Miix devices that can log in separately.
Privacy piece: we need to see if we can potentially serve patients at home; there may be some approaches we could use.
The individual computer literacy piece is a separate question about how we can do this.
How are the questionnaires divided in terms of what the participant completes versus what an assessor rates?
First issue: network access. Can the local PSSP people ask somebody about that? (Alexia will put the specific question in an email.) Then we can ask about TeamViewer, etc.