Updating the General Practitioner on the Association Between Tooth Loss and Temporomandibular Disorders: A Systematic Review

The belief that the absence of one or more teeth is associated with temporomandibular disorders (TMD), although long-standing, persists within the dental profession. Although evidence points to a lack of association between loss of posterior support and the presence of TMD, critical studies on the extent, number, or location of these losses are lacking. This systematic review therefore aimed to investigate the association between tooth loss and the presence of TMD signs or diagnostic subgroups. Search strategies combining keywords for tooth loss and TMD were run in six databases (PubMed, Embase, Web of Science, Livivo, Lilacs, and Scopus) and in the gray literature from August to September 2020. Observational studies that investigated the association between tooth loss and TMD were considered. Risk of bias was assessed using the Joanna Briggs Institute (JBI) critical appraisal checklists for analytical cross-sectional, case-control, and cohort studies. Finally, the certainty of evidence was assessed using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach. Six articles met the eligibility criteria and were included in the review. Of these, five had a high risk of bias and one a moderate risk. Only one study showed an association, between the loss of posterior teeth and the presence of joint sounds and joint pain; the others found no significant association with TMD signs or diagnostic subgroups. There is no scientific evidence to support an association between the loss of one or more teeth and the presence of TMD signs and symptoms or diagnostic subgroups.


PRISMA 2020 Checklist

INTRODUCTION
Rationale (item 3): Describe the rationale for the review in the context of existing knowledge. Location: p. 1.
Objectives (item 4): Provide an explicit statement of the objective(s) or question(s) the review addresses. Location: p. 2.

METHODS
Eligibility criteria (item 5): Specify the inclusion and exclusion criteria for the review and how studies were grouped for the syntheses. Location: p. 3.
Information sources (item 6): Specify all databases, registers, websites, organizations, reference lists and other sources searched or consulted to identify studies. Specify the date when each source was last searched or consulted. Location: p. 3.
Search strategy (item 7): Present the full search strategies for all databases, registers and websites, including any filters and limits used. Location: S2 Table.
Selection process (item 8): Specify the methods used to decide whether a study met the inclusion criteria of the review, including how many reviewers screened each record and each report retrieved, whether they worked independently, and if applicable, details of automation tools used in the process. Location: p. 4.
Data collection process (item 9): Specify the methods used to collect data from reports, including how many reviewers collected data from each report, whether they worked independently, any processes for obtaining or confirming data from study investigators, and if applicable, details of automation tools used in the process. Location: p. 4.
Data items (item 10a): List and define all outcomes for which data were sought. Specify whether all results that were compatible with each outcome domain in each study were sought (e.g. for all measures, time points, analyses), and if not, the methods used to decide which results to collect.
Data items (item 10b): List and define all other variables for which data were sought (e.g. participant and intervention characteristics, funding sources). Describe any assumptions made about any missing or unclear information. Location: p. 4.
Study risk of bias assessment (item 11): Specify the methods used to assess risk of bias in the included studies, including details of the tool(s) used, how many reviewers assessed each study and whether they worked independently, and if applicable, details of automation tools used in the process.
Synthesis methods (item 13e): Describe any methods used to explore possible causes of heterogeneity among study results. Location: NA.
Synthesis methods (item 13f): Describe any sensitivity analyses conducted to assess robustness of the synthesized results. Location: NA.
Reporting bias assessment (item 14): Describe any methods used to assess risk of bias due to missing results in a synthesis (arising from reporting biases).
Certainty assessment (item 15): Describe any methods used to assess certainty (or confidence) in the body of evidence for an outcome.

RESULTS
Study selection (item 16a): Describe the results of the search and selection process, from the number of records identified in the search to the number of studies included in the review, ideally using a flow diagram. Location: Figure 1.
Study selection (item 16b): Cite studies that met many but not all inclusion criteria ('near-misses') and explain why they were excluded. Location: S4 Table.
Study characteristics (item 17): Cite each included study and present its characteristics. Location: Table 1.
Risk of bias in studies (item 18): Present assessments of risk of bias for each included study. Location: Tables 2, 3, and 4.
Results of individual studies (item 19): For all outcomes, present, for each study: (a) summary statistics for each group (where appropriate) and (b) an effect estimate and its precision (e.g. confidence/credible interval), ideally using structured tables or plots. Location: Table 1.
Results of syntheses (item 20a): For each synthesis, briefly summarise the characteristics and risk of bias among contributing studies.
Results of syntheses (item 20b): Present results of all statistical syntheses conducted. If meta-analysis was done, present for each the summary estimate and its precision (e.g. confidence/credible interval) and measures of statistical heterogeneity. If comparing groups, describe the direction of the effect.

DISCUSSION
Discussion (item 23c): Discuss any limitations of the review processes used.
Discussion (item 23d): Discuss implications of the results for practice, policy, and future research.

OTHER INFORMATION
Registration and protocol (item 24a): Provide registration information for the review, including register name and registration number, or state that the review was not registered. Location: pp. 1/3.
Registration and protocol (item 24b): Indicate where the review protocol can be accessed, or state that a protocol was not prepared.
Registration and protocol (item 24c): Describe and explain any amendments to information provided at registration or in the protocol. Location: p. 3.
Support (item 25): Describe sources of financial or non-financial support for the review, and the role of the funders or sponsors in the review. Location: p. 12.
Competing interests (item 26): Declare any competing interests of review authors.
Explanation of cohort studies critical appraisal
1. Were the two groups similar and recruited from the same population?
Check the paper carefully for descriptions of participants to determine if patients within and across groups have similar characteristics in relation to exposure (e.g., risk factor under investigation). The two groups selected for comparison should be as similar as possible in all characteristics except for their exposure status, relevant to the study in question. The authors should provide clear inclusion and exclusion criteria that they developed prior to recruitment of the study participants.
2. Were the exposures measured similarly to assign people to both exposed and unexposed groups?
A high quality study at the level of cohort design should mention or describe how the exposures were measured. The exposure measures should be clearly defined and described in detail. This will enable reviewers to assess whether or not the participants received the exposure of interest.
3. Was the exposure measured in a valid and reliable way?
The study should clearly describe the method of measurement of exposure. Assessing validity requires that a 'gold standard' is available to which the measure can be compared. The validity of exposure measurement usually relates to whether a current measure is appropriate or whether a measure of past exposure is needed. Reliability refers to the processes included in an epidemiological study to check repeatability of measurements of the exposures. These usually include intra-observer reliability and inter-observer reliability.
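Inter- and intra-observer reliability of a categorical exposure measure is often quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. The sketch below is illustrative only; the raters, labels, and data are invented for the example and are not taken from any study in the review.

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # Observed agreement: fraction of subjects both raters labelled identically.
    po = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected chance agreement, from each rater's marginal label frequencies.
    pe = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
             for c in categories)
    return (po - pe) / (1 - pe)

# Two examiners classifying ten subjects as exposed ("E") or unexposed ("U").
rater_1 = ["E", "E", "U", "E", "U", "U", "E", "U", "E", "U"]
rater_2 = ["E", "E", "U", "U", "U", "U", "E", "U", "E", "E"]
print(round(cohens_kappa(rater_1, rater_2), 2))  # → 0.6
```

Values near 1 indicate near-perfect agreement; values near 0 indicate agreement no better than chance.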

4. Were confounding factors identified?
Confounding has occurred where the estimated intervention exposure effect is biased by the presence of some difference between the comparison groups (apart from the exposure investigated/of interest). Typical confounders include baseline characteristics, prognostic factors, or concomitant exposures (e.g., smoking). A confounder is a difference between the comparison groups and it influences the direction of the study results. A high quality study at the level of cohort design will identify the potential confounders and measure them (where possible). This is difficult for studies where behavioral, attitudinal or lifestyle factors may impact on the results.

5. Were strategies to deal with confounding factors stated?
Strategies to deal with effects of confounding factors may be dealt within the study design or in data analysis. By matching or stratifying sampling of participants, effects of confounding factors can be adjusted for. When dealing with adjustment in data analysis, assess the statistics used in the study. Most will be some form of multivariate regression analysis to account for the confounding factors measured. Look out for a description of statistical methods as regression methods such as logistic regression are usually employed to deal with confounding factors/variables of interest.
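When stratification is the chosen strategy, a common way to combine the stratum-specific 2x2 tables into a single adjusted estimate is the Mantel-Haenszel pooled odds ratio. The following is a minimal sketch with invented numbers (not data from the included studies), assuming a hypothetical exposure-outcome question stratified by age:

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel pooled odds ratio across strata.

    Each stratum is a 2x2 table (a, b, c, d):
    a = exposed cases,   b = exposed controls,
    c = unexposed cases, d = unexposed controls.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Hypothetical example: outcome vs. exposure, stratified by age group so
# that age cannot confound the pooled estimate.
strata = [(10, 20, 15, 55),   # younger participants
          (30, 25, 20, 25)]   # older participants
print(mantel_haenszel_or(strata))  # → 1.625
```

Within each stratum the confounder is (approximately) constant, so the pooled estimate reflects the exposure-outcome association with that confounder held fixed.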
6. Were the groups/participants free of outcome at the start of the study (or at the moment of exposure)?
The participants should be free of the outcomes of interest at the start of the study. Refer to the 'methods' section in the paper for this information, which is usually found in descriptions of participant/sample recruitment, definitions of variables, and/or inclusion/exclusion criteria.
7. Were the outcomes measured in a valid and reliable way?
Read the methods section of the paper. If, for example, lung cancer is assessed based on existing definitions or diagnostic criteria, the answer to this question is likely to be yes. If lung cancer is assessed using observer-reported or self-reported scales, the risk of over- or under-reporting is increased and objectivity is compromised. Importantly, determine whether the measurement tools used were validated instruments, as this has a significant impact on outcome assessment validity.
Having established the objectivity of the outcome measurement instrument (e.g., for lung cancer), it is important to establish how the measurement was conducted. Were those involved in collecting data trained or educated in the use of the instrument(s) (e.g., radiographers)? If there was more than one data collector, were they similar in terms of level of education, clinical or research experience, or level of responsibility in the piece of research being appraised?
8. Was the follow-up time reported and sufficient to be long enough for outcomes to occur?
The appropriate length of follow up will vary with the nature and characteristics of the population of interest and/or the intervention, disease, or exposure. To estimate an appropriate duration of follow up, read across multiple papers and note the range of follow-up durations; the opinions of experts in clinical practice or clinical research may also assist. For example, a longer timeframe may be needed to examine the association between occupational exposure to asbestos and the risk of lung cancer. It is important, particularly in cohort studies, that follow up is long enough for the outcomes to occur, although the research question and outcomes being examined will usually dictate the follow-up time. It is also important in a cohort study that a high percentage of participants is followed up. As a general guideline, at least 80% of patients should be followed up; a dropout rate of 5% or less is generally considered insignificant, while a rate of 20% or greater is considered to significantly affect the validity of the study. However, in observational studies conducted over a lengthy period of time, a higher dropout rate is to be expected. A decision on whether to include or exclude a study because of a high dropout rate is a matter of judgement based on the reasons why people dropped out and whether dropout rates were comparable in the exposed and unexposed groups.
9. Was follow up complete, and if not, were the reasons to loss to follow up described and explored?
Reporting of efforts to follow up participants that dropped out may be regarded as an indicator of a well conducted study. Look for a clear and justifiable description of why people were left out, excluded, or dropped out. If there is no clear description or statement in this regard, the answer will be a 'No'.
10. Were strategies to address incomplete follow-up utilized?
Some people may withdraw due to change in employment or some may die; however, it is important that their outcomes are assessed. Selection bias may occur as a result of incomplete follow up. Therefore, participants with unequal follow up periods must be taken into account in the analysis, which should be adjusted to allow for differences in length of follow up periods. This is usually done by calculating rates which use person-years at risk, i.e. considering time in the denominator.
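The person-years adjustment described above amounts to dividing the number of new events by the total observed time at risk, rather than by the number of participants. A minimal sketch (the follow-up times and event count are invented for illustration):

```python
def incidence_rate_per_1000(events, person_years):
    """New events per 1,000 person-years at risk."""
    return 1000 * events / person_years

# Hypothetical cohort with unequal follow-up: each entry is one
# participant's years of observation before dropout, death, or study end.
follow_up_years = [5.0, 2.5, 4.0, 1.0, 3.5]
total_person_years = sum(follow_up_years)  # 16.0 person-years
new_events = 2
print(incidence_rate_per_1000(new_events, total_person_years))  # → 125.0
```

Because each participant contributes only the time actually observed, unequal follow-up periods no longer bias the rate the way a simple per-person proportion would.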
11. Was appropriate statistical analysis used?
As with any consideration of statistical analysis, consideration should be given to whether there was a more appropriate alternate statistical method that could have been used. The methods section of cohort studies should be detailed enough for reviewers to identify which analytical techniques were used (in particular, regression or stratification) and how specific confounders were measured. For studies utilizing regression analysis, it is useful to identify if the study identified which variables were included and how they related to the outcome. If stratification was the analytical approach used, were the strata of analysis defined by the specified variables? Additionally, it is also important to assess the appropriateness of the analytical strategy in terms of the assumptions associated with the approach as differing methods of analysis are based on differing assumptions about the data and how it will respond.
Explanation of case-control studies critical appraisal
1. Were the groups comparable other than presence of disease in cases or absence of disease in controls?
The control group should be representative of the source population that produced the cases. This is usually done by individual matching; wherein controls are selected for each case on the basis of similarity with respect to certain characteristics other than the exposure of interest. Frequency or group matching is an alternative method. Selection bias may result if the groups are not comparable.

2. Were cases and controls matched appropriately?
As in item 1, the study should include clear definitions of the source population. Sources from which cases and controls were recruited should be carefully looked at. For example, cancer registries may be used to recruit participants in a study examining risk factors for lung cancer, which typify population-based case-control studies. Study participants may be selected from the target population, the source population, or from a pool of eligible participants (such as in hospital-based case-control studies).
3. Were the same criteria used for identification of cases and controls?
It is useful to determine if patients were included in the study based on either a specified diagnosis or definition. This is more likely to decrease the risk of bias. Characteristics are another useful approach to matching groups, and studies that did not use specified diagnostic methods or definitions should provide evidence on matching by key characteristics. A case should be defined clearly. It is also important that controls must fulfil all the eligibility criteria defined for the cases except for those relating to the diagnosis of disease.
4. Was exposure measured in a standard, valid, and reliable way?
The study should clearly describe the method of measurement of exposure. Assessing validity requires that a "gold standard" is available to which the measure can be compared. The validity of exposure measurement usually relates to whether a current measure is appropriate or whether a measure of past exposure is needed. Case-control studies may investigate many different "exposures" that may or may not be associated with the condition. In these cases, reviewers should use the main exposure of interest for their review to answer this question when using this tool at the study level. Reliability refers to the processes included in an epidemiological study to check repeatability of measurements of the exposures. These usually include intraobserver reliability and interobserver reliability.
5. Was exposure measured in the same way for cases and controls?
As in item 4, the study should clearly describe the method of measurement of exposure. The exposure measures should be clearly defined and described in detail.
6. Were confounding factors identified?
Confounding has occurred where the estimated intervention exposure effect is biased by the presence of some difference between the comparison groups (apart from the exposure investigated/of interest). Typical confounders include baseline characteristics, prognostic factors, or concomitant exposures (e.g., smoking). A confounder is a difference between the comparison groups and it influences the direction of the study results. A high quality study at the level of case-control design will identify the potential confounders and measure them (where possible). This is difficult for studies where behavioral, attitudinal, or lifestyle factors may impact on the results.
7. Were strategies to deal with confounding factors stated?
Strategies to deal with effects of confounding factors may be dealt within the study design or in data analysis. By matching or stratifying sampling of participants, effects of confounding factors can be adjusted for. When dealing with adjustment in data analysis, assess the statistics used in the study. Most will be some form of multivariate regression analysis to account for the confounding factors measured. Look out for a description of statistical methods as regression methods such as logistic regression are usually employed to deal with confounding factors/variables of interest.
8. Were outcomes assessed in a standard, valid and reliable way for cases and controls?
Read the methods section of the paper. If, for example, lung cancer is assessed based on existing definitions or diagnostic criteria, the answer to this question is likely to be yes. If lung cancer is assessed using observer-reported or self-reported scales, the risk of over- or under-reporting is increased and objectivity is compromised. Importantly, determine whether the measurement tools used were validated instruments, as this has a significant impact on outcome assessment validity.
Having established the objectivity of the outcome measurement instrument (e.g., for lung cancer), it is important to establish how the measurement was conducted. Were those involved in collecting data trained or educated in the use of the instrument(s) (e.g., radiographers)? If there was more than one data collector, were they similar in terms of level of education, clinical or research experience, or level of responsibility in the piece of research being appraised?
9. Was the exposure period of interest long enough to be meaningful?
It is particularly important in a case-control study that the exposure time is sufficient enough to show an association between the exposure and outcome. It may be that the exposure period may be too short or too long to influence the outcome.
10. Was appropriate statistical analysis used?
As with any consideration of statistical analysis, consideration should be given to whether there was a more appropriate alternate statistical method that could have been used. The methods section should be detailed enough for reviewers to identify which analytical techniques were used (in particular, regression or stratification) and how specific confounders were measured. For studies utilizing regression analysis, it is useful to identify if the study identified which variables were included and how they related to the outcome. If stratification was the analytical approach used, were the strata of analysis defined by the specified variables? Additionally, it is also important to assess the appropriateness of the analytical strategy in terms of the assumptions associated with the approach as differing methods of analysis are based on differing assumptions about the data and how it will respond.
Explanation of analytical cross sectional studies critical appraisal
1. Were the criteria for inclusion in the sample clearly defined?
The authors should provide clear inclusion and exclusion criteria that they developed prior to recruitment of the study participants. The inclusion/exclusion criteria should be specified (e.g., risk, stage of disease progression) with sufficient detail and all the necessary information critical to the study.
2. Were the study subjects and the setting described in detail?
The study sample should be described in sufficient detail so that other researchers can determine if it is comparable to the population of interest to them. The authors should provide a clear description of the population from which the study participants were selected or recruited, including demographics, location, and time period.
3. Was the exposure measured in a valid and reliable way?
The study should clearly describe the method of measurement of exposure. Assessing validity requires that a "gold standard" is available to which the measure can be compared. The validity of exposure measurement usually relates to whether a current measure is appropriate or whether a measure of past exposure is needed. Reliability refers to the processes included in an epidemiological study to check repeatability of measurements of the exposures. These usually include intraobserver reliability and interobserver reliability.
4. Were objective, standard criteria used for the measurement of condition?
It is useful to determine if patients were included in the study based on either a specified diagnosis or definition. This is more likely to decrease the risk of bias. Characteristics are another useful approach to matching groups, and studies that did not use specified diagnostic methods or definitions should provide evidence on matching by key characteristics.

5. Were confounding factors identified?
Confounding has occurred where the estimated intervention exposure effect is biased by the presence of some difference between the comparison groups (apart from the exposure investigated/of interest). Typical confounders include baseline characteristics, prognostic factors, or concomitant exposures (e.g., smoking). A confounder is a difference between the comparison groups and it influences the direction of the study results. A high quality study at the level of cross sectional design will identify the potential confounders and measure them (where possible). This is difficult for studies where behavioral, attitudinal, or lifestyle factors may impact on the results.
6. Were strategies to deal with confounding factors stated?
Strategies to deal with effects of confounding factors may be dealt within the study design or in data analysis. By matching or stratifying sampling of participants, effects of confounding factors can be adjusted for. When dealing with adjustment in data analysis, assess the statistics used in the study. Most will be some form of multivariate regression analysis to account for the confounding factors measured.
7. Were the outcomes measured in a valid and reliable way?
Read the methods section of the paper. If, for example, lung cancer is assessed based on existing definitions or diagnostic criteria, the answer to this question is likely to be yes. If lung cancer is assessed using observer-reported or self-reported scales, the risk of over- or under-reporting is increased and objectivity is compromised. Importantly, determine whether the measurement tools used were validated instruments, as this has a significant impact on outcome assessment validity.
Having established the objectivity of the outcome measurement (e.g., lung cancer) instrument, it is important to establish how the measurement was conducted. Were those involved in collecting data trained or educated in the use of the instrument/s? (e.g., radiographers). If there was more than one data collector, were they similar in terms of level of education, clinical or research experience, or level of responsibility in the piece of research being appraised?
8. Was appropriate statistical analysis used?
As with any consideration of statistical analysis, consideration should be given to whether there was a more appropriate alternate statistical method that could have been used. The methods section should be detailed enough for reviewers to identify which analytical techniques were used (in particular, regression or stratification) and how specific confounders were measured. For studies utilizing regression analysis, it is useful to identify if the study identified which variables were included and how they related to the outcome. If stratification was the analytical approach used, were the strata of analysis defined by the specified variables? Additionally, it is also important to assess the appropriateness of the analytical strategy in terms of the assumptions associated with the approach as differing methods of analysis are based on differing assumptions about the data and how it will respond.
Supplementary Material
References of excluded articles (listed in S4 Table).