oghaki is discussing:

The irony is that, if his thesis is correct, then you'll have a strong tendency to believe it, even if, whether through a deficiency in your interpretation, understanding, or attentiveness, or in the rigor of the argument he presents, you don't possess a rational basis for holding that belief.

Further, I don't doubt that a tendency similar to what he's suggesting exists in many cases. However, the framing seems a bit incoherent. For example, the majority of questions that might occur to us will likely:

ⅰ) be contingent on variables we fail to consider;

ⅱ) have coefficients on the components we do consider (i.e., relative weightings of parameters) that we're not competent to estimate with sufficient precision to confidently assess the question, even in the event that we divine the relationship between the quantity representing the answer and a set of variables which overwhelmingly determine that quantity (i.e., even if the other variables upon which the quantity is contingent can safely be ignored); or

ⅲ) depend on variables which, even if we identify them and their relationship to the answer, we're unable to instantiate with sufficient precision, because we neither have their values nor are able to measure them.

The reasons for these deficiencies range from the question's being fundamentally unanswerable in principle, to its requiring impractical means to model or measure, to its merely falling outside our own competence, whether because we don't possess sufficient capacity for reason to model it, or because we're largely or entirely ignorant of the variables' values, even when accurate values exist of which an expert might be aware. Already, then, the vast majority of conceivable questions are likely either fundamentally or practically unanswerable, or are so complex and so heavily inductive that even a person who devotes substantial time and resources to studying the associated subject matter will not have a rational basis for a belief that deviates much from 0.5 (i.e., random).
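The second and third of those failure modes can be made concrete with a toy sketch (my own illustration, with made-up numbers, not anything from the original argument): even when we know exactly which variables a yes/no answer depends on, imprecision in their relative weightings can leave the question substantially ambiguous.

```python
import random

# Toy model: the answer to a yes/no question is the sign of a weighted sum
# of two variables whose values we can observe exactly. The catch: we can
# only bound each weight to a wide interval, so we sample over our
# uncertainty about the weights and see how often the answer comes out "yes".
x1, x2 = 1.0, -0.8  # observed variable values (arbitrary illustrative numbers)

random.seed(0)
trials = 10_000
yes = 0
for _ in range(trials):
    # Our imprecise knowledge: each coefficient lies somewhere in [0.5, 1.5].
    w1 = random.uniform(0.5, 1.5)
    w2 = random.uniform(0.5, 1.5)
    if w1 * x1 + w2 * x2 > 0:
        yes += 1

# Roughly 0.7 here: knowing the right variables exactly still leaves the
# question far from settled when the weights are this imprecise.
print(yes / trials)
```

Tightening the weight intervals pushes the frequency toward 0 or 1; widening them pushes it back toward 0.5, which is the point being made above.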
Some exceptions include ⅰ) math and logic, which are deductive and entirely non-empirical; ⅱ) physics, which, while empirical, effectively represents our attempt at maximum rigor in physical philosophy; and, to a lesser degree, ⅲ) the other natural sciences, which allow for stronger rational belief inasmuch as they rely on the scientific method (i.e., experiments) to generate knowledge, those experiments are accurate analogs of the phenomena being studied, and we're able to measure and control all variables upon which the outcome depends. Any field of study which addresses empirical questions (i.e., questions that are not purely abstract, normative, or aesthetic) but does not rely on experiment can produce a rational basis for belief (i.e., a deviation away from 0.5) that is multiple orders of magnitude weaker than what experiment provides, because such a field contains no tools that can yield rational confidence in assessing a causal relationship, which is the foundation of all understanding (the replication crisis makes a lot more sense in light of this). In many fields, experiment is precluded for fundamental or practical reasons: we can't create two economies which are identical except for one variable yet are also sufficiently large and complex to function as reasonable proxies for a real national economy, nor can we create two identical planets that vary by a single variable to study its effect on weather. Thus, even most experts can expect little more than extremely modest deviations from the level of confidence one might have in predicting the outcome of an ideal coin toss.
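The asymmetry between experimental and non-experimental evidence can be put in Bayesian terms (a sketch of my own, with purely illustrative likelihood ratios): starting from the 0.5 prior argued for above, the posterior depends on the likelihood ratio the evidence carries, and evidence that cannot isolate a causal relationship typically carries a ratio barely above 1.

```python
def posterior(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability of a hypothesis H after observing evidence E,
    where likelihood_ratio = P(E | H) / P(E | not H)."""
    odds = (prior / (1 - prior)) * likelihood_ratio  # Bayes' rule in odds form
    return odds / (1 + odds)

# A well-controlled experiment might carry a large likelihood ratio;
# a merely correlational result, one barely above 1 (numbers are illustrative).
print(posterior(0.5, 100.0))  # ~0.990: belief moves decisively from 0.5
print(posterior(0.5, 1.05))   # ~0.512: belief barely deviates from the coin toss
```

The exact ratios are hypothetical, but the structure matches the claim in the text: without a tool that makes the evidence strongly diagnostic of the causal hypothesis, the rational deviation from 0.5 stays extremely modest.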

Thus, the rational position on the bulk of questions is, by definition, something close to total ambiguity between two choices. However, Shermer seems to be suggesting that we can have a rational preference for a certain position based on the aspects or results of some kind of democratic assessment of questions that actually gets transmitted to us, e.g., what news stations tell us about what scientists think. Of course, no question is contingent on its assessment, even if the assessor is the most remarkable among us, because the causal relationship between a question and the outcome of its assessment flows in the opposite direction. Indeed, we might find that a correlation exists, assuming accurate information about the assessments of those remarkable experts is not corrupted so badly in transmission that it becomes misleading or unusable, but such an analysis does not involve a single variable upon which the question being assessed depends, and so provides us no actual rational basis for holding a non-trivial belief about the actual problem (unfortunately, rational belief is an objective phenomenon, and it comes only in proportion to the actual understanding we work to acquire). Hence, being a 'Skeptic', assuming the title refers to something similar to the meaning of the English word, should really refer to an acknowledgement of ambiguity and an exhibition of the modesty which such acknowledgement rationally demands. (Note: this applies only to questions insofar as they are subject to rational assessment. Irrational aspects of questions, where belief is only possible through faith, such as the axioms which underlie any assessment, beliefs about metaphysics, or belief in a god, are entirely outside the scope of this analysis.)