The limited role of preregistration in observational studies

Transparency and open science are crucial at all stages of research; this includes prospective registration of studies, code and data sharing, and high-quality reporting. Open science practices help prevent questionable research practices such as ‘p-hacking’, HARKing (hypothesising after results are known), and, at the most basic level, selective reporting, i.e. picking the ‘best’ results to include in a paper. Recent high-profile examples of research fraud and data manipulation, potentially resulting in millions of dollars wasted, have made the need for transparency in research even more pressing. Preregistration is one important method used to reduce questionable research practices and is typically strongly recommended by the open science community. There are even pushes for preregistration of studies to be mandated, currently focused in the domain of clinical trials (as it should be). The focus of this post will be on the role of study preregistration in observational research using real-world data, an area rapidly growing in popularity and increasingly used as evidence by decision-makers to inform policy.

Why do we preregister studies?

At its core, study preregistration is simply the act of specifying your plan for your research before you start it and sharing this plan on a register (e.g. clinicaltrials.gov, OSF). Preregistration’s benefits have been clearly demonstrated for research with prospective data collection, i.e. clinical trials, so I will only touch on them briefly.

In clinical trials, the International Committee of Medical Journal Editors (ICMJE) has recommended that all trials be preregistered. This has slowly been adopted in journal policies, though it is far from universal. The reason for this recommendation is that preregistration allows people (ideally in peer review) to check the submitted manuscript against its registration for deviations, making any selective reporting, p-hacking, or HARKing more visible and thus reducing its prevalence. Selective reporting is probably the most common of these and occurs when results that are not sexy or positive are omitted from the publication. This can make the conclusions of studies misleading and leaves a lot of research never published, contributing to an estimated 85% of research being wasted. Preregistration allows this to be noticed and questioned, hopefully reducing the occurrence of selective reporting. Essentially, preregistration limits researchers’ ability to change their methodology after finding out their results, reducing the ability of the results to guide the methods.

A common concern is that preregistration locks authors into a plan, when the research process should change as more information is gained. This isn’t the case: nothing mandates that the protocol be followed rigidly, exactly as intended. Rather, preregistration provides a baseline against which changes can be explained, requiring authors to justify any deviations and improving the trustworthiness of their research.

Clearly, preregistration is important for clinical trials. However, the ICMJE drew an important distinction and exempted observational research from its preregistration recommendation. A typical ‘open science bro’ might say everyone should preregister everything because it will always reduce questionable research practices, but that may not be the best solution.

As discussed above, we preregister studies to reduce selective reporting, p-hacking, and HARKing. However, preregistration only reduces these practices if data collection occurs after registration and this ordering can be empirically verified. Such verification is becoming more common in clinical trials but is not possible for observational studies.

Preregistration in observational studies

In clinical trials, preregistration is (relatively) easily enforced because data collection is prospective. However, what about when the data already exist? There are two examples which I think illustrate why preregistration does little to benefit observational research. Let’s say the authors of a paper are a group of clinician-researchers at a hospital who think, “We should have a look at the effect of drug X.” They get a waiver of consent from their ethics review committee and collate all the data. They look through it, run a few analyses, and come up with an interesting result.

How would preregistration benefit us here?

Perhaps preregistering their observational study might make them think more about their study design, but they don’t yet know what data they actually have; they have a broad idea, but not enough to write a study plan listing everything their study requires. Basically, they don’t want to shoot themselves in the foot by specifying that they need a certain confounder measured before knowing it is indeed measured in their data. You could say this is no excuse: if you can’t do science transparently, or don’t have the data to conduct the desired study, don’t do it at all. This is definitely true; having less, but higher-quality, research would benefit everyone, but that is not what happens in reality. Our authors decide to go ahead with the study and hold off ‘preregistering’ until they have figured out what is measured in the data. They then register the study, knowing that nobody can verify that they didn’t have access to the data beforehand. In this scenario, preregistration is useless.

Another example, which is becoming increasingly common, is the use of ‘big data’ in research. To access these data, which may include electronic health records, claims, or other linked data, researchers typically require approval from the data custodians. Currently, for the large custodians of these sources (the many Nordic registries, the US Veterans Affairs health database, the Medicare claims database), the process involves submitting a research proposal that is assessed and approved before access to the data is given. I am not aware of any custodian that requires this proposal to be publicly registered. If this were required, preregistration would have the same beneficial effect for observational research using ‘real-world data’ as it does for clinical trials, because it would be verifiable that registration occurred prior to exposure to the data. However, even in the most structured cases, where data custodians grant access to the data, this is not so. If an observational study is preregistered, we simply have to trust that the researchers registered it before having access to the data, and we are left in the same position as before: preregistration relies on trusting that researchers are telling the truth, an assumption that is increasingly hard to believe.

So, the goal of preregistration is to reduce selective reporting and HARKing, where results are chosen after the fact based on how much they support the researcher’s agenda or will help them get published. In clinical trials, preregistration works as an excellent mechanism to reduce HARKing, but it is far less effective for observational studies: the timing of data access is neither measured nor public, so it is impossible to verify the ‘pre’ in preregistration. You could still argue that most researchers act honestly, and that on average preregistration will make authors more thoughtful in designing their studies and reduce HARKing. But what if preregistration had negative effects?

Every decision requires weighing up its benefits and harms, and the potential ‘harms’ of preregistering studies are not often discussed. These harms could include dichotomising evidence into preregistered or not, resulting in studies that were not preregistered being dismissed as inferior, or at higher risk of bias, than their preregistered counterparts. This is only justified if preregistering does in fact reduce the risk of bias, which in observational studies is unclear. Preregistering a study may well be associated with higher study quality, as researchers who are more likely to preregister may also be more likely to conduct a rigorous study; however, preregistration may not cause them to conduct a better study. Preregistration then becomes an erroneous measure of quality, which could lead to ignoring high-quality evidence that is not preregistered and overstating the quality of low-quality evidence that claims to be.

In summary, there are many benefits to preregistration in clinical trials, where it is easier to verify that registration occurred prior to data collection. Currently, no mechanisms exist in the infrastructure of observational data to verify preregistration, diminishing its benefits. Further, as preregistration may not improve the quality of observational studies, it becomes useless as an indicator of quality, and potentially harmful if used in evidence synthesis or decision-making.

Is there anything that could be used instead of preregistration to reduce selective reporting or HARKing?

Yes, at least partially. It appears that specifying a target trial, i.e. the hypothetical randomised trial that would be conducted to answer a causal question, and then emulating that trial as closely as possible with observational data, improves causal inference in observational studies. This is emerging as an important methodology in observational research and is being used to provide evidence to inform decision-making. Like preregistration, emulating a target trial cannot eliminate selective reporting or HARKing by itself, but it makes them more visible, which, by extension, may reduce authors’ willingness to engage in these practices. In observational studies, HARKing or p-hacking could be done by developing a specific combination of eligibility criteria or outcome definitions, with the final combination chosen, after testing many alternatives, based on the results. By clearly specifying the target trial and its emulation, these unusually specific criteria can be noticed in peer review or by other readers, and the quality of the study can be more easily appraised (see the sketch below). Alternatively, as in clinical trials, data custodians could begin to require public registration of the target trial protocol before giving access to the data, although even this has limitations: many studies can come from one dataset, and it may not be feasible to require authors to register protocols for all potential studies from that dataset. Another opportunity is to share data more freely to allow others to reproduce and verify results; despite the barriers to this dissolving over time, it remains a rare practice when using ‘big data’.
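To make the idea of an explicit target trial specification concrete, here is a minimal sketch in Python of what writing the specification down as a structured object might look like. All field names and criteria here are hypothetical, invented for illustration; the point is simply that once eligibility criteria and outcome definitions are spelled out as discrete, versioned components, an unusually specific post hoc combination has nowhere to hide.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TargetTrialProtocol:
    """An explicit, written-down specification of the hypothetical trial
    being emulated. Every component that could be tweaked after seeing
    results is spelled out up front."""
    question: str                          # the causal question being asked
    eligibility: tuple[str, ...]           # who would have been enrolled
    treatment_strategies: tuple[str, ...]  # the arms of the hypothetical trial
    time_zero: str                         # when eligibility and follow-up start
    outcome: str                           # the primary outcome definition
    follow_up: str                         # duration and censoring rules
    analysis_plan: str                     # the pre-specified estimator

# A hypothetical example, loosely following the 'drug X' scenario above.
protocol = TargetTrialProtocol(
    question="Does initiating drug X reduce one-year mortality?",
    eligibility=(
        "adults aged 40-75",
        "no use of drug X in the previous 12 months",
    ),
    treatment_strategies=("initiate drug X within 7 days", "do not initiate"),
    time_zero="date all eligibility criteria are first met",
    outcome="all-cause mortality within 365 days of time zero",
    follow_up="365 days, censored at disenrolment",
    analysis_plan="pooled logistic regression, intention-to-treat analogue",
)
print(protocol)
```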

Conclusion

Ultimately, increasing transparency in research is clearly important, but preregistering all study types without regard to the actual implications should not be recommended under the current research infrastructure. The benefits of preregistration in clinical trials are unlikely to extend to observational studies, as it is currently impossible to verify that a study is indeed ‘pre’-registered. However, explicitly emulating a target trial, and reporting both the target trial protocol and that of its emulation, may reduce selective reporting by making it more easily visible.

Get what you want, more often: how to communicate with busy people

In any job, it is likely that you’ll have to communicate with people in writing, be it applying for something, asking for help, or asking others to do something for you. This is something we do without much thought: we just write what we want to convey. But have you ever sent off an email and had no response? Crickets…

Everyone is very busy. If you asked your colleagues, friends, or family, I can almost guarantee most would say they wish they had more time. People receive on the order of 100 messages a day, and in the working world, 100 emails a day. So when we communicate with people, we are asking them to give us some of their already precious time, amongst a sea of other requests. When I’m asking someone for some of their time, it’s important that I make my request as clear as possible and the time commitment as small as possible, because that means they are more likely to do whatever it is I’m asking.

There is a great talk by Todd Rogers from Harvard University detailing the science of corresponding with busy people. In the spirit of saving you time, here are his five points on how to communicate with people more effectively.

1. Use as few words as possible

  • We think we really need to set the scene for a request, often leading with an explanation of what we are doing and finishing with a request of the reader. However, big blocks of text make most people’s eyes glaze over. A more effective way to communicate is to just cut to the chase: introduce yourself and ask your question, cutting the lengthy explanation in the middle. Rogers showed that when the number of words was reduced, focussing only on the main point, there was a 78% higher response rate.
  • This is counterintuitive, because we would assume that more information explaining why someone should fulfil our request would improve response rates, but it doesn’t; it just makes people lose attention.

2. Make the text as easy to read as possible

  • The more effort the reader has to put in to read your text, the less likely they are to respond. Think about most academic papers: they’re really hard to focus on because they are written to be understood by experts. Compare that to a BuzzFeed article: it’s hard to stop reading because it is so simply written. Obviously these are two extremes, but the point is that the more simply you write something, the more likely it is to be read.
  • The way to do this is to reduce the number of syllables per word, i.e. use simple words; reduce the number of words per sentence, sticking to one simple idea per sentence; and keep the grammar simple.

3. Use formatting to direct attention

  • If you write something as a block of text with no paragraphs, no highlighting or bolding, you’re not letting people skim easily, making them less likely to actually take any of it in. You WANT people to be able to skim your writing, so bold or highlight the important parts. I’m sure when you read the paragraph in point 1, you read the heading, then the highlighted section containing my main point.
  • Highlighting is a powerful way to draw attention, but it can also work against you, because it makes people less likely to read the rest of the text, so use it carefully. Do not overuse it either: if you highlight a whole paragraph, highlighting loses its power and nothing stands out. Think back to a time when you highlighted whole paragraphs in textbooks; I guarantee not much of it stuck.

4. Make the key information obvious and noticeable

  • Ensure that your main points jump out at the reader immediately: think the subject line of an email, the first sentence, the last sentence, and bolded/highlighted sections. Allowing readers to skim the text makes a response more likely.

5. Make the response required as quick and easy as possible

  • If the person reading your request has to jump through hoops, no matter how simple the request may be, it is less likely to be met. A strategy I’ve employed is to re-phrase open-ended questions like “What do you think about this…?”, which demand a long response, into closed questions, which are much simpler for the respondent: “I think this… please let me know if that is okay.” This takes just a simple “Yep, sounds good” from the respondent, making a response much more likely. Even better, depending on the situation, you could say “I am planning on doing XXX, please let me know if that is not okay.” By doing that, you are making them opt out, so you don’t even require a response.

Summary

Everyone is very busy, so it’s important our communication is as clear and easy on the reader as possible.

  • Write as simply and briefly as possible
  • Use formatting to allow people to skim the most important parts
  • Make the response as easy as possible

I recommend watching the video for a more in-depth explanation of how effective these strategies are and some more concrete examples of them.

Review and Tips of an Honours Year

Ten months ago I started my honours degree, a Bachelor of Science (Honours). I had just completed my Bachelor of Exercise Physiology, a four-year degree, and there was no specific ‘exercise physiology honours’, so I had to do it through the Faculty of Science. The year is made up of full-time research under the guidance of a supervisor, whom you can find yourself; you submit a literature review and give a talk on it, then submit a final manuscript and give another talk.

This post will be a bit of a rollercoaster, describing my reasons for doing honours and my experiences along the way, and attempting to condense the most important things I learned throughout the year. It will be somewhat specific to allied health degrees, but I hope there will be something any prospective honours student can take away. I hope my experience helps, and if you have any questions, I’m always happy to chat; just follow the link at the bottom.

Why honours?

This was one of the most common questions I got asked, and fair enough. At my university, the University of New South Wales, the typical path for an exercise physiology student is either to complete the undergraduate degree and become a practitioner, OR, if you are interested in research, to sign up for a masters by research in the faculty. When I first considered doing honours, I was not aware of any other students looking to do the same thing. So why?

First of all, I realised in my final year that research was something I was interested in, but I wanted to experience doing it full time before making any long-term decisions. I was talking to a PhD candidate about his experience in research, and he mentioned that the best way to set yourself up for a PhD scholarship (if I wanted to go down that path) was to do honours rather than a masters. Initially I didn’t know why, but I trusted him, so I decided to sign up. It was only a year; if I didn’t like research, then that chapter was closed and I hadn’t spent too much time on it.

Choosing a Supervisor

Choosing a supervisor is, I believe, one of the biggest decisions of the whole year. A great paper from Monash University (https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0236327) demonstrated that for PhD candidates, the research environment (i.e. the group you do the PhD with) predicted the success of the student MORE than the student’s academic ability. I strongly believe this also applies to honours. I was lucky enough to be asked to help out on a project in my fourth year within a large group in my faculty, the McAuley group in the Centre for Pain IMPACT at Neuroscience Research Australia. I had a good experience, my supervisor, James, is a highly successful researcher, and the group was excellent, so I decided to approach them to supervise me for honours. There are countless websites out there discussing what to ask a prospective supervisor; my only advice is to choose someone you think you’ll like, and who has a good track record of supervising successful students. They’ll often share things like that on their Twitter or elsewhere; I saw that my supervisor had had multiple students publish 20+ papers during their PhDs, which is a very positive sign.

What did I want out of the year?

First and foremost, I wanted experience, but once I had decided to do honours, I had to work out exactly what I wanted out of it. At the start of the year I really had no idea what I’d be doing the year after I finished, but I knew I wanted to learn as much as possible, and to pick up as many skills (preferably transferrable ones) as possible. However, when I told this to my supervisor, he said: that’s great, but if you want to continue in academia, you need publications.

I am a firm believer that you should always set a high bar for yourself; in aiming high, you keep all your options open. My other goal for the year was to come out competitive for a PhD scholarship. I realised that getting a PhD scholarship was quite difficult (duh), particularly if you were aiming for an NHMRC scholarship, one of the most competitive in the country. I figured that if I was able to get that, I’d be doing pretty well, and would be well equipped for any other direction I wanted to take (e.g. a medical degree).

So what were the most important things I learned?

Take notes on all existing literature:

I spent the first two months searching for every paper in the field, identifying the most important researchers, and summarising the most important papers. This was tedious and eventually became quite laborious, but I found that if I didn’t read papers in enough detail, slowly enough to take notes, I would forget or not fully understand the main points, which would shape my project for the rest of the year. This is the easiest step to rush, because you just want to get started on your project, but extra time spent here, until you feel like you know the gaps in the field inside out, is well worth it.

Write to clarify your understanding:

After the first week I had started drafting my literature review, which was due about eight weeks after I started. The reason I did this was to make sure that everything I was reading was (1) going into my head, and (2) forming a coherent narrative that would become the basis of my literature review, identifying the key areas my project had to address.

Most people finish reading, then plan, then write. However, I find that writing helps clarify my thinking, identifying spots where I don’t know enough, which then guides more reading. If you do this early enough, you have time to write and re-write as your understanding progresses. I think I completely re-wrote my literature review about three times. Initially, I didn’t constrain myself to a word limit, or to any sort of quality or style; I just wrote. I knew the first time I wrote something it would not be as concise and clear as possible, because I was only just understanding the concept. Over time, paragraphs became single sentences, and my work got more and more refined with each re-write.

Get things done early:

I knew I wanted to work on other projects throughout the year, so in order to be asked, I felt I had to show that I was reliable. To do this, I aimed to get drafts done weeks in advance, showing that if someone wanted work done quickly, I was a great person to hand it to. This is advice I would give to any student or researcher wanting more responsibility: I quickly learned that all researchers are very time-poor, so someone who can help with some work, and do it quickly, is very useful. I wanted to show I would be very useful.

Getting things done early is good and bad. By completing honours tasks early, I ended up with a lot of spare time but nothing to fill it with. I love being busy, so all this spare time made me feel like I was wasting it, giving me a somewhat existential feeling of languishing. However, most normal people have hobbies to spend time on, so I don’t imagine many would encounter this issue.

Do additional work:

Your ability to do additional work really depends on your project. I was conducting a survey online, so once I had established the survey and started sharing it, I had a lot of spare time; even recruiting wasn’t a very time-consuming task. I probably spent, on average, ~10-15 hours a week on my honours project, leaving about 20 hours for other projects. However, no matter the project, there is always space to make more time for additional work. I made a point of showing that I was a reliable worker and did good work: I handed drafts in early, responded rapidly to feedback, and tried to help out others. After my literature review and introductory talk were finished, I was asked to lead one, and then two, additional papers unrelated to my honours. This is exactly what I wanted, so I grabbed it with both hands and ran with it. These additional projects would become publications (both are currently under review with journals), so this was definitely working towards my publication count, setting me in good stead for a scholarship.

More importantly, another goal for the year was to learn new skills, and these projects forced me to do that. Both were on the topic of meta-science, which meant the data collection was not as time-consuming as my honours project, but they used very different methodologies. One looked at reliability, so I had to (a) understand reliability and (b) learn how to code and calculate it. The other used intervention reporting guidelines to conduct an overview of the existing literature on how well studies report exercise interventions. Both required me to learn many different packages (in R), run various analyses, and create many different figures, all of which I’ll be able to use in the future; they also improved my coding skills remarkably (a very transferable skill).

These additional projects also forced me to manage my time effectively, because at any one time I was leading three different projects. My focus would shift to whichever project needed it, and without even thinking, I ended up staggering them so that they never all required my full attention at once, making the workload very manageable.

Write early, and write long:

When it came to writing up my thesis, I followed the same principles as with my literature review. My thesis was limited to 5500 words, and my first draft was over 8000. I re-read and re-wrote it several times before showing it to anyone (another tip I learned: be your own first editor – more about the tips I learned for writing here[link to writing post]). This reduced the burden on my colleagues, allowing them not to worry about silly mistakes (although some were definitely still there) and to focus on the more fundamental flow and concepts of the thesis, which hopefully made it higher quality in the end.

Summary

Overall, my honours year was incredibly positive. I had a great time, and I was never incredibly stressed (wild, I know), which I put down to my organisation as well as the project I had. I also had a very productive year in terms of publications: I have two first-author papers under review, four published letters to the editor, and a first-author editorial just accepted. I am very confident that these achievements were largely due to finding an incredible group and supervisors, as well as trying to do what I’ve described above.

Effectiveness and Efficiency

Whilst putting together a research proposal, I stumbled across a quote from Archie Cochrane, one of the fathers of evidence-based medicine. The proposal was about how the results of randomised controlled trials generalise (or don’t) to the general public, and the quote fitted perfectly.

Between the findings of an RCT and benefit in the community … there is a gulf which has been much underestimated.

Archie Cochrane, 1972 (Effectiveness and Efficiency)

I thought this was a great quote for many reasons: it fit my proposal incredibly well, and it’s always nice when an expert has had similar ideas. But mostly, I was fascinated that concerns similar to those of my research had been around for 50 years! So I managed to get a copy of the Archie Cochrane book in which this appeared and had a read.

Wow, was the book an amazing read! He detailed what he believed were the main issues facing the NHS (the UK’s National Health Service). What shocked me is how similar many of these issues are to ones we still face today, in Australia and across the world. In this post I’ll outline just one of the issues Cochrane discussed.

Cochrane opened the book with a statement from his time as a medical student in the 1930s, when he would protest that “All effective treatments must be free”. This is the premise of the book, and it is not as straightforward a statement as you might think.

First, what does ‘effective’ even mean? For a treatment to be deemed effective, it must reduce the burden of a condition MORE than the natural history of that condition does. This is a fascinating point and often hard to determine, which is why we need randomised controlled trials. Sometimes a condition improves with time, even without intervention, so if someone does intervene, e.g. a doctor giving a drug or performing surgery, and the patient subsequently gets better, it is easy to mistake the treatment as the cause of the improvement. Without a randomised controlled trial, relying on anecdotal or observational evidence can lead to ineffective treatments being used across a country, or the world. A well-designed randomised controlled trial can show beyond doubt that a treatment CAUSES an effect, because there is a control group who ideally undergo everything the intervention group does, minus the active treatment. Natural history is often the reason control groups improve in a randomised controlled trial even though they have had no treatment. This is also why it is crucial to measure BETWEEN-GROUP differences in a randomised controlled trial, because it is that difference you are looking for; however, this is often not done, perhaps because many tested treatments are indeed not effective, making such studies harder to publish. That is a completely separate conversation, but a useful point to note.
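To see why the between-group difference is the quantity that matters, here is a minimal simulation sketch in Python. All the numbers (sample size, symptom scores, recovery rates) are invented for illustration: a treatment with zero effect looks impressive if you only track how much the treated group improved, because natural history does the improving.

```python
import numpy as np

rng = np.random.default_rng(42)

n = 200  # patients per arm (hypothetical)

# Symptom score at enrolment (higher = worse), identical in both arms.
baseline = rng.normal(60, 10, size=(2, n))

# Natural history: everyone improves by ~15 points, treated or not.
natural_recovery = rng.normal(15, 5, size=(2, n))

treatment_effect = 0.0  # the treatment itself does nothing at all

followup = baseline - natural_recovery
followup[0] -= treatment_effect  # arm 0 = treated, arm 1 = control

within_group_change = baseline[0].mean() - followup[0].mean()
between_group_diff = followup[1].mean() - followup[0].mean()

print(f"Treated group improved by {within_group_change:.1f} points")  # ~15: looks great!
print(f"Between-group difference: {between_group_diff:.1f} points")   # ~0: the truth
```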

Cochrane explains that many treatments become embedded in health systems despite not being effective. His example from the 70s was admitting people with acute ischaemic heart disease to hospital coronary care units, which was seen at the time as crucial to improving outcomes and was very expensive, because almost every hospital had to set up a specialist coronary care unit. A later randomised controlled trial, however, showed that being admitted to a coronary care unit produced no better outcomes than treatment at home. Had this trial been done PRIOR to embedding coronary care units in hospitals across the country, millions of dollars would likely have been saved.

This may seem abstract and outdated (for all I know, it may no longer be the case), so I’ll discuss a more recent example, where a routine procedure proved to be no better than placebo.

Knee arthroscopy, or getting the knee ‘cleaned out’, is STILL a very common procedure for people with knee issues: meniscus tears, osteoarthritis, etc. However, way back in 2002, a blinded randomised controlled trial (meaning the patients did not know which treatment they received) compared knee arthroscopy to sham knee arthroscopy. Sham knee arthroscopy? I hear you say. It sounds ridiculous, but the surgeons did EVERYTHING except the actual debridement of the knee joint; the patients underwent surgery, being cut in the same places as the intervention group, but were immediately stitched back up with no work done.

Both groups had THE SAME outcomes… meaning there was no extra benefit from actually doing the surgery; it was all a placebo effect. This is why randomised controlled trials are IMPERATIVE before implementing new treatments into a health service: things that seem to work may not actually work.

This could become a much longer blog post but I think just touching on the effectiveness of treatments is enough for now.

Isometric exercise MAY reduce blood pressure and appears safe!

High blood pressure is the leading risk factor for death across the globe, affecting 1.13 billion people and resulting in over 10 million deaths in 2019. Clearly, high blood pressure is a huge problem, and how to treat it is much researched. There are many drugs that are effective at reducing blood pressure, but these have still not reduced the overall burden of the condition, potentially due to unwanted side effects. Exercise (aerobic and strength-based) has been shown to reduce blood pressure, but it is often tiresome and time-consuming. Isometric exercise is a type of resistance training involving exercises such as wall sits or hand grips; it is quick (usually taking just twelve minutes!) and relatively easy to do.

My team and I recently conducted a systematic review and meta-analysis of the effects of isometric exercise on blood pressure (https://www.nature.com/articles/s41440-021-00720-3). Systematic reviews are the highest level of evidence and are often used to inform clinical practice and health policy.

We found that isometric exercise appears to be safe in all populations, including people with hypertension and older adults. This is an important finding, as it was previously unknown, and many clinical guidelines have not recommended isometric exercise due to safety concerns.

Importantly, we also found that isometric exercise may significantly reduce blood pressure: it may reduce systolic blood pressure (the top number of a blood pressure reading) by 8mmHg and diastolic blood pressure (the bottom number) by 4mmHg, both of which are clinically important amounts, similar to many drugs!

It is important to note that I say isometric exercise “may” reduce blood pressure because the quality of the studies included in our review was very poor, which limits how much we can trust them and their results. However, it appears unlikely that isometric exercise is dangerous, as was once believed, and it could be another great tool for people living with high blood pressure who don’t like exercising, struggle to make time for it, or have physical limitations that make exercising difficult.

Have a read of our study and feel free to contact me if you cannot access the paper!

Reference:

Hansford, H.J., Parmenter, B.J., McLeod, K.A. et al. The effectiveness and safety of isometric resistance training for adults with high blood pressure: a systematic review and meta-analysis. Hypertens Res (2021). https://doi.org/10.1038/s41440-021-00720-3

Is paternity leave the missing step for gender equality?

I believe that men should be entitled to the same leave options as women after the birth of a child. No, not because men endure the same hardship as women in giving birth (obviously), but because by not having equal parental leave, we are forcing women to be the primary caregiver, a role which usually sticks for life. Note: for simplicity, I’m going to describe a typical male:female nuclear family in this post, although I understand there are many issues for other make-ups of couples which I haven’t discussed.

By having maternity leave without a similar paternity leave, we force women to sacrifice their work. The most common argument against there being a gender pay gap is that in Australia, women are (most of the time) paid equal to men for equal work. This is largely true, but many would agree that women typically sacrifice their earnings to have children, therefore earning less over their career than a matched male who did not take time off. On the surface, this sounds fine, like it is the woman’s choice to take time off. But in high-income countries the average maternity leave is 276 days while average paternity leave is 56 days, so women are coerced into taking the leave, as they are more likely to receive benefits than the man, for whom no equivalent support (i.e. equal paternity leave) is set up. This establishes the woman as the primary caregiver and makes changing that arrangement further down the line much more challenging. Why? Because if a mother takes a year of maternity leave and the father returns to work after 2-4 weeks, the father has had more time to progress in his workplace and will therefore be earning more, making it a financial burden for him to become the primary caregiver later. It makes financial sense for the mother to simply return part-time and keep looking after the child. This reduces the mother’s future earning potential, because she is less likely to be promoted as a part-time employee, and at the same age she ends up earning significantly less than the father who returned full time. A survey from PwC found that nearly 50% of mothers felt they had been overlooked for a promotion because they had children, clearly showing the impact of having children on future earnings.

That scenario is a relatively direct effect of having maternity leave without equal paternity leave. An indirect effect is that when age-matched males and females apply for a job in their late 20s and early 30s, if they are of equal ability, it is logical for a company offering maternity leave but not paternity leave to choose the male, as he is likely to take less time off in the event of having a child, making him a more productive worker. This may not happen everywhere, but I’m sure it does occur.

So what’s the answer? It is clear that even though maternity leave affords women more flexibility in the short term, it actually stunts their career growth compared to an equivalent male’s. The answer is parental leave, which can be used equally between males and females, leaving the decision about who takes leave completely up to the family: if the mother would prefer to take more leave and be the primary caregiver, she can, but the father is equally able to make the same decision; it is not made for them. Increased paternity leave, or better, parental leave, has been shown to increase earnings for the mother, reduce the number of sick days she takes, and increase female employment in private firms (World Bank, 2020). It sounds like one solution to improving gender equity in the workforce, but does it actually work?

In 2019, Iceland introduced nine months of parental leave, with at least three months reserved for the father, and this has put Iceland at the forefront of gender equity in Europe and the world. Social attitudes there still hold that mothers should stay home to care for their children, but this is slowly changing as well.

Ultimately, I find it fascinating that paternity leave, something ostensibly for the benefit of men, is actually a crucial part of creating gender equality.

I have no clue what I want to do in life and I’m happy about it

I’m currently in a period of immense doubt about my future and about my life. This isn’t something I’ve ever experienced before, and I’m so glad that I am going through it.

Throughout university I was always sure about what I wanted to do with my life and how I wanted it to play out. It certainly wasn’t always the same path, but I believed in it with full conviction. After a year of university I wanted to do medicine, and I was 100% sure this was the right path for me. Then, after a year of planning my life as a doctor, I decided medicine wasn’t for me and I would do a PhD instead; for a year, I was set on that. In my final year I flip-flopped between wanting to be a medical doctor and wanting a PhD with increasing frequency. I decided to sit the GAMSAT (the medical entry test) in early 2020 without putting too much emphasis on it; I got a respectable mark and conveniently forgot to apply for entry. I told myself this didn’t matter, as I was going to undertake a PhD at that point, so I signed up for a year of honours to introduce myself to research.

After my final year, full of placement and research, I was no closer to knowing what I wanted to do, but I vehemently told people I was going to do a PhD whenever I was asked. This must have given me comfort, sounding like I knew what I wanted to do with my life. I was lucky enough to finish my placement early, in mid-December, giving me two months before honours started to figure my life out. In this period I vowed not to fill my time with work or other commitments, but rather to have it as a period of contemplation. And boy, contemplate I did. I spent at least 20 minutes every day meditating (sometimes up to an hour) and wrote every day. My turmoil was displayed pretty clearly in my writing: I was always writing about careers, finance, or productivity, as these were the things on my mind.

During this time I spent a lot of time with friends, as well as learning how to surf, which I am so grateful I did. I began honours quite apprehensive, something I’d never experienced before in a situation like this, and I realised I did not know why I was doing it. Through my time contemplating, I realised I don’t know what I want to do with my life, but that by aiming high early (i.e. choosing medicine or a PhD), I had worked as hard as possible to afford myself the choice when crunch time came. I believe very high goals are the best thing you can set yourself in a period of doubt, because it is much easier to elect to take an easier path down the track when you’ve been aiming for something harder than it is the other way around. Although this isn’t without cost.

I realised I had spent so much of my time at university focussed on my academics that I did not spend a huge amount of time with my friends; I wasn’t the most social person, and I believe my social skills deteriorated because of it. Now, I don’t mean to say I was a hermit. I still socialised, just not as much as I would have liked, reflecting back on it.

Now, I’m in a position where I have absolutely no clue what I’m going to do next year, and I love it. I love it because I think I’m moving beyond the typical social script of ‘do well at university, get a good job and earn lots of money’. One of my biggest fears is committing to a path I don’t truly enjoy, and that’s the path I believe I was on. Maybe I do want to do a PhD or medicine, but maybe I don’t. I’ve stopped saying I know my path; I’m just going to enjoy it and see where it takes me. I’m focussing on the things that matter: my health and fitness, my family and friends, and developing more as a person, rather than as a student. I never thought I’d be so happy to be so lost.

P.S. I have been reading a lot of books which helped me come to this realisation; they may be of interest to some people, so here they are:

Range: How Generalists Triumph in a Specialized World – David Epstein: the most successful people are not those who work the hardest and specialise, but those who diversify, are curious, and have broad interests and passions, which allow their thinking to be flexible rather than rigid.

Buddhism without Beliefs – Stephen Batchelor: a great book describing Buddhist beliefs without the religious aspect behind them, and a great introduction to a different way of seeing the world.

Ego is the Enemy – Ryan Holiday: a great book detailing all the ways our ego will be our downfall, and how to reduce its negative effects.

When is a healthcare intervention actually ‘worth it’ to a patient? – The smallest worthwhile effect

When seeking to identify whether a healthcare intervention was successful, we typically look for statistical significance in a change, to show that the effect wasn’t down to chance. This is very good and widely used, but statistical significance alone (i.e. p < 0.05) is not enough to say an intervention was worthwhile. In a study with many participants, a very small change can be statistically significant even though it has no importance to a clinician or a patient. Let’s use an example: if researchers find that a new blood pressure drug reduces blood pressure, with statistical significance, by 1mmHg, this effect means nothing clinically, so it likely isn’t an intervention you would recommend. However, if another drug reduces blood pressure by 10mmHg with statistical significance, most people would also consider that change clinically important. That 10mmHg change may be the difference between someone being hypertensive (>140/90mmHg) and moving into the high-normal category (130-139/85-89mmHg), which comes with important reductions in the risk of cardiovascular events. But how big an effect makes something clinically important, or, what is the minimal clinically important difference (MCID)?
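As a quick illustration of how a clinically meaningless effect can still be statistically significant, here is a minimal simulation sketch in Python. The sample size, the 10mmHg between-person variability, and the true 1mmHg drug effect are all invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 5000           # patients per arm (hypothetical)
true_effect = 1.0  # the drug really does lower systolic BP, but only by 1mmHg

control = rng.normal(150, 10, n)                # systolic BP, no drug
treated = rng.normal(150 - true_effect, 10, n)  # systolic BP, on the drug

t_stat, p_value = stats.ttest_ind(treated, control)
reduction = control.mean() - treated.mean()

# With n this large, p is typically far below 0.05,
# yet a ~1mmHg reduction is clinically meaningless.
print(f"Mean reduction: {reduction:.2f} mmHg, p = {p_value:.2e}")
```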

The MCID was defined in 1989 by Jaeschke et al. as “the smallest difference in score in the domain of interest which patients perceive as beneficial and which would mandate, in the absence of troublesome side effects and excessive cost, a change in the patient’s management” (1). In my mind, the key part of this statement is that it is the smallest change which patients perceive as being beneficial, because, at the end of the day, if we as clinicians are giving interventions which are not likely to make the patient feel better, what is the point of administering them?

The MCID has been defined for many different measures, from the six-minute walk test (6MWT) to ratings of pain, but the methods used to determine it raise questions about whether they are truly patient-centred. Let’s take pain, for example. Pain can be measured on many different scales, but let’s use the 11-point numerical rating scale for pain (NRS-P), which goes from no pain to the worst pain imaginable. The MCID is calculated by putting patients through an intervention and asking them to rate their pain on the NRS-P at the beginning and end of treatment, to determine the change that occurred. At the end of treatment, they are also asked how they feel overall on a global rating scale, i.e. whether they feel the same, slightly worse, much worse, slightly better, or much better. The responses on the two scales are compared, and the change in score that most closely corresponds with feeling ‘slightly better’ or ‘slightly worse’ is taken as the MCID.
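Here is a minimal sketch of one common variant of that anchor-based calculation, in Python, on entirely made-up data: the MCID is taken as the mean change among patients who rated themselves ‘slightly better’. Real studies also use other estimators, such as ROC-based cut-offs.

```python
import numpy as np

# Hypothetical data: change in 0-10 NRS-P pain score (positive = improvement)
# paired with each patient's global rating at the end of treatment.
change = np.array([0.5, 1.0, 2.0, 2.5, 1.5, 3.0, 4.5, 5.0, 0.0, 2.0])
global_rating = np.array([
    "same", "same", "slightly better", "slightly better", "slightly better",
    "slightly better", "much better", "much better", "same", "slightly better",
])

# Anchor-based MCID: the mean change among those who felt 'slightly better',
# i.e. the smallest change patients themselves perceive as beneficial.
mcid = change[global_rating == "slightly better"].mean()
print(f"Anchor-based MCID estimate: {mcid:.1f} points on the NRS-P")
```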

This sounds pretty good on the surface, but it is the researchers or clinicians who decide that patients only have to feel ‘slightly better’ to have seen a clinically important change. What if patients want to feel much better, and ‘slightly better’ wasn’t actually worth the treatment they went through? These are the major limitations of the MCID: it factors in neither the patient’s view on what amount of change is important, nor the costs, risks, and inconveniences of the treatment producing the effect.

In 2009, Ferreira et al. (2) coined the term ‘smallest worthwhile effect’, which is intervention-specific and factors in the costs, risks, and inconveniences of the intervention. To demonstrate the importance of intervention-specific measures, imagine two patients undergoing different treatments for their pain: one has major surgery and the other attends a series of educational sessions with a clinician. If the MCID were a 2-point reduction in pain on the 11-point NRS-P, and both patients achieved a reduction of 2.5 points, would both be equally happy? Would both consider that they saw a clinically important change? Probably not, because the surgery carries much more severe costs, risks, and inconveniences.

When calculating the smallest worthwhile effect, the intervention is explained to patients, who are then asked what effect, over and above the effect of no treatment, would make the intervention worthwhile to them, considering its costs, risks, and inconveniences. They are then asked, “What if that effect was 0.5 points less? Would that still be worthwhile?”, and this is repeated until they no longer consider the treatment worthwhile; thus, the smallest worthwhile effect is established for that treatment. Another aspect of the smallest worthwhile effect is that the hypothetical effect patients are considering is in addition to the natural history of the condition. For low back pain, most people see around a 30% reduction in pain over the first few weeks of a flare-up, so the effect of any intervention must be over and above this natural recovery, or regression to the mean.
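To make the elicitation procedure concrete, here is a minimal interactive sketch in Python. The 0.5-point step comes from the description above; the function name, starting value, and wording are my own simplifications of the benefit-harm trade-off method.

```python
def elicit_smallest_worthwhile_effect(start_effect: float, step: float = 0.5) -> float:
    """Walk a patient down from their stated worthwhile effect in `step`-point
    decrements until the treatment stops being worth its costs, risks, and
    inconveniences. Returns the last effect they still considered worthwhile."""
    effect = start_effect
    while effect - step > 0:
        answer = input(
            f"If the treatment reduced your pain by {effect - step:.1f} points "
            "more than no treatment, would it still be worthwhile? (y/n) "
        )
        if answer.strip().lower() != "y":
            break  # one step lower is no longer worth it
        effect -= step
    return effect

# Hypothetical use: the patient initially says a 3-point reduction (over and
# above natural recovery) would make the treatment worthwhile.
swe = elicit_smallest_worthwhile_effect(start_effect=3.0)
print(f"Smallest worthwhile effect for this patient: {swe:.1f} points")
```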

The current research (3 & 4) on the smallest worthwhile effect for pain has looked at several physiotherapy interventions and non-steroidal anti-inflammatory drugs (NSAIDs) in low back pain, but there are many treatments beyond these; thus, for my honours year, I’m conducting a study looking to identify the smallest worthwhile effects of different interventions for low back pain.

So why is this important?

The value of knowing the smallest worthwhile effect of an intervention is that it lets clinicians know, on average, what effect patients consider worthwhile from different treatments. From there, they can identify whether those treatments are actually able to produce that effect. For example, suppose a patient believes that, considering the side effects of the medication, they would need a 3-point reduction in pain (on an 11-point NRS) to make taking a drug worthwhile compared to no treatment, but the clinician knows the best evidence shows the drug typically reduces pain intensity by only 1 point. The clinician may then recommend other treatments with a more favourable cost-benefit profile, or with a smallest worthwhile effect that aligns more closely with their actual efficacy. Ultimately, it is crucial that we in research ask patients what they think of the interventions we are applying to them, and get their input into whether a treatment actually ‘works’ and is worthwhile from their perspective.

References:

(1). Jaeschke R, Singer J, Guyatt GH. Measurement of health status. Ascertaining the minimal clinically important difference. Control Clin Trials 1989;10:407-15.

(2). Ferreira ML, Ferreira PH, Herbert RD, Latimer J. People with low back pain typically need to feel ‘much better’ to consider intervention worthwhile: an observational study. Aust J Physiother 2009;55:123-7.

(3). Ferreira ML, Herbert RD, Ferreira PH, et al. The smallest worthwhile effect of nonsteroidal anti-inflammatory drugs and physiotherapy for chronic low back pain: a benefit-harm trade-off study. J Clin Epidemiol 2013;66:1397-404.

(4). Christiansen DH, de Vos Andersen NB, Poulsen PH, Ostelo RW. The smallest worthwhile effect of primary care physiotherapy did not differ across musculoskeletal pain sites. J Clin Epidemiol 2018;101:44-52.

P.S. This was to help me solidify my topic in my own head, ensuring I understand it. If you’re interested in it, hit me up.

Intermittent Fasting: Thoughts after 50 fasts

In November last year I decided to experiment with intermittent fasting, or time-restricted eating. Almost every day for 50 days, I restricted my eating to an 8-hour window, fasting for the other 16 hours. This usually meant finishing dinner at 8pm, then not eating until 12pm the next day. If I went out and ate until 9:30pm, I would begin eating at 1:30pm the next day. I talked about why I was doing this and the evidence for it here, but essentially I had heard a lot about it and its benefits for health, and figured there weren’t really any risks, so I’d give it a go.

One thing many people asked when I mentioned it was, “You don’t need to lose weight, why are you doing it?” I realised that fasting is most commonly seen as just another weight-loss diet, the new fad. This question was usually followed by me explaining autophagy and the health benefits of fasting independent of weight loss, which, again, I’ve explained previously.

How did it feel?

For the first week it was quite tough: I was getting pretty hungry around mid-morning and consumed a lot of tea to keep myself full, which works a treat, if you were wondering. Soon I became very used to not eating in the morning, especially if I was going to placement and not stopping until lunchtime. That’s when I noticed my first real issue with fasting: the post-fast insulin coma. I was having a massive lunch to make up for skipping breakfast, usually with a lot of carbs to give me energy, but come 3pm I was exhausted. I was initially perplexed, but eventually put it down to my body being so eager to break down the carbs that it produced a heap of insulin to cope with the increased blood sugar, leaving me tired as my sugar levels dropped. On some days I experimented with lower-carb, higher-fat lunches, or lower-GI carbs, both of which helped reduce the post-prandial crash. I also started having a smaller lunch, not trying to play catch-up, which helped too; I would just snack throughout the afternoon on things like nuts and my homemade banana bread. The key for me was to eat healthily when I did eat, resisting the urge to reach for sugary or junk foods, because I knew I’d feel much worse afterwards. I also found tracking my food in MyFitnessPal very helpful to ensure I was getting enough food in me without overdoing it.

The good thing about fasting is that it is very convenient: you don’t need to allow time for breakfast in the morning, just grab a coffee or a tea and go. It would be a great way to lose weight, because if you make it through to lunch without eating and then eat normally, you’d be in a deficit, although it must be said I didn’t lose any significant amount of weight during the 50 days, as I was very conscious of eating enough. An unexpected benefit was that even after I stopped fasting, I found I was much more flexible with my meal times. I used to always be hungry: I’d eat breakfast at 6-7am, have a snack at 9, lunch at 12, then continue to snack until dinner, by which time I’d be insatiably hungry. Fasting helped me become much more able to sit with what we usually consider ‘hunger’ and just be okay with it, not needing to eat something immediately, helping me say goodbye to being hangry (most of the time). I think it’s worth giving fasting a go for at least a week purely for this benefit alone, all other health benefits aside.

It wasn’t all good. When I began fasting I was not running very much; I was still in a stage of rehab where my sessions were under 5km and usually made up of easy running intervals. I wasn’t expending a lot of energy, and I was usually doing these sessions in the afternoons. Issues began to arise when I started running more, six, seven, or eight kilometres at a time, and in the morning. I was feeling very flat and low on energy, with my heart rate much higher than it should have been. I soon concluded that I wouldn’t fast while training normally (which for me is >5hrs of exercise per week). Luckily, that nicely coincided with completing the 50 days of fasts. Over the past two weeks I’ve had a period of rest due to a recent niggle and a bruised knee, and I’ve picked fasting back up; I noticed that when I wasn’t training, I still wanted to eat like I was, so fasting seemed a good way to kick myself back into gear and become okay with hunger again.

Overall, I loved my time intermittent fasting. I think it’s a great strategy for being more conscious of what you eat and when you eat it, as well as for disrupting our psychological dependence on constant access to food. I’m not going to continue it while I’m training a lot, because it can be energy-sapping, especially when training in the morning, but for anyone doing <5hrs a week of training, it’s definitely worth a try, although it is crucial that you eat enough to fuel your training. Ultimately, it wasn’t life-changing, but it wasn’t detrimental either; I now understand what it’s like to fast, and I picked up a few handy tricks along the way. I’d recommend everyone try it, unless you have a history of under-fuelling your body, as an athlete or just as a person in general.

Writing Better: Why I choose to write in public

I have been writing a blog post every day until the end of the year, with the goal of improving my writing skills. I started doing this because I realised I was not a very ‘correct’ writer: my writing had a lot of grammatical errors and didn’t flow very nicely. I am very bullish on the value of being able to communicate well; it is one of the characteristics that almost all successful people share. I am a natural talker (although this definitely needs work as well), but writing was letting me down, so I decided to practise it. I didn’t want to just write something for myself every day, because I knew I wouldn’t do it; I wanted to practise in public. I got this idea from David Perell, and it was a game changer for me. The pressure of knowing (hoping) that other people would read my writing naturally made me more thoughtful about what I wrote, and made me put more effort into it.

When I started, I followed the thought process promoted by many people in the business of creating content, be it books, YouTube videos, or blogs: the mindset of “your first 100 pieces of writing are not going to be good anyway, so don’t try to perfect each blog post; just write, and accept that it won’t be good.” This was great for helping me form a habit of writing every day, which is arguably one of the hardest things about writing. I also think it helped me think more creatively in general; I had to write something, so I had to have an idea, not the other way around. If I had waited for ideas to come to me so I could write about them spontaneously, I would have written a grand total of 3-4 blog posts, and none of them would have been great.

I know that I haven’t written any blog posts that are amazing; some are definitely more interesting than others, and some are flat-out boring. But throughout the process, I have felt writing come easier and easier, which is conducive to writing more, which will result in improvements. Or will it?

One of my friends raised the point that he had some issues with the “just produce a lot of content” idea of improvement, because it means you become comfortable producing bad work rather than aiming for higher-quality work. I realised I didn’t have a comeback to this, and it made me question whether my writing has actually improved. I just looked back at my first blog post of the 50 days expecting to be embarrassed by it, but it was actually quite good, and, incidentally, my most-liked blog post. I compared it to a recent post explaining cryptocurrency, and I struggle to see where I have improved. This is a bit disheartening, but I think it’s important to keep assessing yourself to see whether your efforts are actually paying off, and simply putting up a blog post every day without much regard for quality doesn’t seem to be working for me. So what’s the plan?

I still want to improve my writing ability, so what I plan on doing is upping the level of ‘public’ I go. Currently, I just upload every blog post to my website and share the occasional one to LinkedIn, where I have a few friends and other people in the area of exercise physiology; I don’t have any of my supervisors or other people I’d want to impress on there. So, to help me improve my writing, I’m going to add all of those people I look up to on LinkedIn, as well as begin sharing my posts to Twitter, or even write tweet-storms about thoughts I’m having. I’ve told myself a million times I shouldn’t worry about what other people think of my work, or of me in general. Posting more to these channels will be one way to achieve both goals: improving my writing and getting out of my comfort zone.