"But test everything; hold fast what is good." (1 Thessalonians 5:21)
Measuring Faith
How do we measure something like faith? We can't simply put it on a scale or wrap measuring tape around it. Instead, like other social sciences such as psychology, sociology, and economics, we use surveys and behavioral data to measure important outcomes.
For example, we could ask the survey question "How close do you feel to God right now, at this moment?" on a 1-7 scale. This gives us a rough idea of how strong your relationship with God feels to you today. Or, to measure how generous a church congregation is, we could track how much its members tithe each week.
Measuring faith-based outcomes is worthwhile, especially with large samples and sound statistical methods. But measurement alone can't answer the more interesting questions of cause and effect: how can we strengthen our feelings of closeness to God, and how can pastors nudge their congregations to give more?
Experiments
To answer questions of cause and effect we use randomized controlled trials, also known as experiments. Experiments are considered the "gold standard" of research. They go beyond simply collecting data and measuring relationships. Instead, experiments allow us to test whether something actually causes an outcome, and if so, how big of an effect it has on that outcome.
Experiments allow us to answer questions like:
- Does praying reduce anxiety?
- When is the best time to read the Bible?
- Is a pamphlet an effective way to share my faith?
For example, in one experiment we found that a 1-minute prayer can increase feelings of closeness to God by about 15%. We randomly assigned 300 online survey takers to either write a prayer for 1 minute or write about their day for 1 minute, then measured how close they felt to God on a survey scale. By including a neutral "control" condition (writing about one's day) and randomizing participants to either the control or prayer condition, we were able to test whether the prayer actually caused the increase in feelings of closeness to God.
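The logic of random assignment is simple enough to sketch in a few lines of code. The simulation below is purely illustrative: the baseline rating, the 0.6-point "prayer effect," and the noise level are made-up numbers for demonstration, not our actual data.

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# 300 hypothetical participants, randomly split into two conditions.
N = 300
participants = list(range(N))
random.shuffle(participants)              # random assignment
prayer_group = participants[:N // 2]
control_group = participants[N // 2:]

def rating(shift=0.0):
    """A noisy closeness-to-God rating, clamped to the 1-7 scale."""
    return min(7.0, max(1.0, random.gauss(4.0 + shift, 1.2)))

# Simulate responses; the prayer condition gets a hypothetical 0.6-point boost.
prayer_scores = [rating(shift=0.6) for _ in prayer_group]
control_scores = [rating() for _ in control_group]

diff = statistics.mean(prayer_scores) - statistics.mean(control_scores)
pct = 100 * diff / statistics.mean(control_scores)
print(f"Estimated effect: {diff:.2f} points ({pct:.1f}%)")
```

Because assignment is random, the only systematic difference between the two groups is the prayer itself, so the gap in average ratings estimates its causal effect.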
Of course, experiments can't answer questions of "should." Even if praying didn't increase feelings of closeness to God, that doesn't mean we shouldn't pray. Rather, we can then turn our attention to understanding why it didn't (perhaps the prayer was too short), or how certain ways of praying may affect closeness to God (for example, typing vs. speaking).
Finally, even experiments can be misleading when poorly designed or executed. Ambiguous outcome measures, small sample sizes, non-representative participants or experimental conditions, and conflicts of interest can greatly reduce the quality of an experiment. That's why we follow as many scientific best practices as possible to ensure our research is of top quality.
"Whatever you do, work at it with all your heart, as working for the Lord, not for people." (Colossians 3:23)
"Good" Science
Despite its reputation for rigor, science has taken a credibility hit in recent years. Replication projects have found that roughly 50% of papers published in "top-tier" peer-reviewed journals fail to replicate or report greatly exaggerated effects (Bohannon, 2015). Biases in the publication process (Strang & Siler, 2015) and pressures for obtaining tenure and status (Chevassus-au-Louis, 2019) are just a few of the factors contributing to this problem.
To address these issues, we’ve adopted several research best practices:
- Report the results of all studies, regardless of outcomes
- Make our research materials and data available to you
- Recruit enough participants to detect small differences
- Replicate our own research using different participant pools
- Be explicit about the limitations of each study
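The "enough participants" point can be made concrete with a standard power calculation. The sketch below uses the common normal-approximation formula for a two-group comparison; the effect sizes are standardized mean differences (Cohen's d), and the alpha and power defaults are the conventional 0.05 and 0.80.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate participants needed per group to detect a
    standardized mean difference (Cohen's d) in a two-sided,
    two-sample comparison, via the normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

print(n_per_group(0.5))  # a "medium" effect needs dozens per group
print(n_per_group(0.2))  # a "small" effect needs hundreds per group
```

Detecting small effects is expensive: shrinking the effect size from 0.5 to 0.2 roughly sextuples the required sample, which is why studies recruit hundreds of participants rather than a few dozen.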
Faith Research studies are typically run with about 400-600 people on academically vetted online platforms, including Prolific and Amazon Mechanical Turk (MTurk). Participants on these platforms tend to be reliable, diverse, and more representative of the U.S. population than typical university samples (Berinsky et al., 2012). In addition to online lab experiments, we also conduct research in real field settings like churches, homes, websites, and social media platforms.
Finally, our research is guided by one mission: To help you improve your Christian walk. By harnessing the power of rigorous social science, we hope to help as many people as possible fulfill the two great commandments: Love the Lord our God with all of our heart, soul, and mind, and love our neighbors as ourselves (Matthew 22:37-39).
References
Berinsky, A. J., Huber, G. A., & Lenz, G. S. (2012). Evaluating online labor markets for experimental research: Amazon.com's Mechanical Turk. Political Analysis, 20(3), 351-368.
Bohannon, J. (2015). Many psychology papers fail replication test. Science, 349(6251), 910-911.
Chevassus-au-Louis, N. (2019). Fraud in the Lab: The High Stakes of Scientific Research. Cambridge, MA: Harvard University Press.
Strang, D., & Siler, K. (2015). Revising as reframing: Original submissions versus published papers in Administrative Science Quarterly, 2005 to 2009. Sociological Theory, 33(1), 71-96.