INTERLNG Archives

Discussiones in Interlingua

INTERLNG@LISTSERV.ICORS.ORG

Subject:
From: STAN MULAIK <[log in to unmask]>
Reply To: INTERLNG: Discussiones in Interlingua
Date: Wed, 24 Dec 1997 15:33:14 -0500
Content-Type: text/plain
Parts/Attachments: text/plain (534 lines)

Sorry not to have responded sooner, but our institution's server was down
for the past three days for upgrading:


>
>Stan said:
>>So, you would tend, as do many sociologists, to a "post-modernist" position
>>which regards rhetoric as more fundamental than truth?
>
>        No. Here I hold a "middle of the road" position. I am equally
>appalled by some of the post-modernist stuff that makes discourse
>everything, and some of the opposition stuff that makes "discourse count for
>nothing."

I am relieved to know this. I would take a similar position.

>
>>Whether the [deconstructionist] ploy succeeds depends on the extent to which
>>its recipient
>>ignores the fact that those who use it presume they have certain knowledge
>>about social relations applicable to accounts of scientific activity
>>that is beyond the range of the deconstructionist guns they use on the
>>physical scientists.
>
>      I do not see it as a pure or entire "ploy" -- there is some substance.
>To me, this says: If "they" have what to me is an unquestionable way to get
>to truth (beyond the range of their own guns as well as mine), then they can
>legitimately question my truth, but until then I have truth. So "I" have
>truth and not they, until  they are free of reflexivity problems.
>

My thought would be that they would have to appeal to some principle of
objectivity to justify that knowledge, so if they attack the physicists who
emphasize objectivity, that same attack can be directed at them as well.
On the other hand, it would be appropriate to attack the physicists insofar
as their physical accounts are distorted or blinded by certain prejudices,
preconceptions, or social biases. But then that would conform to the
physicists' own proscriptions against letting one's subjective biases
distort the objective reality one is seeking. That is routine physics.

>(snip)
>>But in that case, then calling a SEM model "fiction" doesn't quite apply.
>>My point about "fiction" is that the concept presupposes the concept of
>>"truth", for "fiction" is not "truth".
>
>        Not to a "constructionist". Try Newtonian physics as truth. This
>will not stand. Try it as fiction. This will not stand either. It contains
>idealized components whose "untrue" status is conceivable only in extremes
>(say of velocity). I am using this here to merely illustrate that your
>thinking of these as unmixable polar opposites is unhelpful. They may be
>mixable, even if there is no obvious  "progression" from one to the other.

I do not see "fiction" as the polar opposite of "truth". But it is a concept
that presupposes one's understanding of the concepts of truth or reality, and
that the fictional account, insofar as it is fiction is not designed to
represent a specific reality or a true situation. And "fiction" can only be
used in situations where one can at the same time identify something as
true in contrast to the fiction. It cannot be used if you say
that everything is a fiction and nothing is true.


>> But if as I understand your use
>>of "all models are fictions", this borders on leaving no room for a use
>>for "truth" in connection with models, and maybe with anything else.
>
>        I would say models as containing both fiction/construction AND
>constraint--section 1.3 did follow upon, and expand upon, sections 1.1 and 1.2.

Rather than sending everyone scurrying to read your book, suppose you
summarize your points here.

Then are you saying that "constraint" is not fiction?


>        But more to the point,  there are specific places where you appeal
>to knowing that some specific model is true. This is the style of thinking
>that is REQUIRED for, but that also got you in trouble in, the four-step in
>the context of claiming that passing a step-1 fit test tells us we have the
>PROPER number of factors--so that the number of factors is a truth we can
>know from a factor model. Not  "failing to reject," but knowing we have the
>proper number. This was the only kind of assertion that would keep the rest
>of the  multi-step from disintegrating. So yes, I see your need (not mine)
>for some definite and locatable truth to specific models.
>        Note also your selective dedication to truth as being determinable.
>In the context of whether all step-1 chisquare ill-fits called for more
>factors, you turned to "but models are only approximation"--so the error
>covariance is not a "real countable factor" (because we have a story/fiction
>of it as probably/hopefully/wouldn't it be convenient if it were--
>"methodology"). And we could/should use an index of CLOSE fit as opposed to
>an index of exact fit.  Close fit is good enough (to get to where, to
>truth?), and some factors are not "important enough" (to be what, "true"
>factors?).


You would like to pin on me the idea that the four-step procedure
determines "Truth", i.e. a final, unassailable, incorrigible, absolute
truth.  Sorry. That's not my view of what this procedure yields. The
procedure evaluates sources of possible lack of fit in a structural
equation model. It recognizes that a series of constraints may be
successively imposed on a SEM model frame to generate a nested series of
models that eventually produce the SEM model in question. If preceding,
less constrained models in the series demonstrate significant lack of fit,
that indicates that some of the SEM model's constraints are already at
fault and would contribute to lack of fit even in the more constrained
SEM model. But at least we can isolate certain constraints as at fault by
noting that at a certain point in the series, when certain additional
constraints were imposed, serious lack of fit arose.

For readers who have tuned in just recently, the four-step procedure involved
a nested sequence of models, constructed from the SEM model under
consideration, as less to more constrained versions of the SEM model. The
initial model is an unrestricted model, which, for SEM models that have
enough indicators per latent variable to allow one to perform a common
factor analysis, is a common factor model with K common factors equal
in number to the number of latent variables of the SEM model, but with
only the minimum number of constraints imposed to achieve a unique rotation.
It is known that its fit is equivalent to that of an exploratory factor analysis
model with the same number of factors. In this sense, it is like a test of
the hypothesis about the number of latent variables. Because the fit is
the same for all rotations, it is not a test about the specific relations
between the manifest indicators and the latents.  If you get poor fit
at this point, then you may have problems. Proceeding further with more
constrained versions will still carry this initial poor fit, combined
with whatever additional poor fit results from misspecification of the
additional constraints. Things won't get better, only worse.

If the unrestricted model fits, one can go to the next step (either
directly or via one-parameter-at-a-time constraints in a series of nested
models) to the "measurement model", which is a confirmatory factor analysis
model with zero loadings of indicators on those factors they are NOT
indicators of. Correlations among factors remain free. Presuming you
are confident in your theory's stipulation of which indicators depend on
which latent variables, and you get acceptable fit for this model, you can
go on to the next step and impose constraints on the relations between the
latent variables--to arrive at your SEM model.
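To make the zero-loading pattern of the measurement model concrete, here
is a minimal Python sketch (mine; the indicator and factor names are
hypothetical). A 1 marks a loading to be estimated, a 0 a loading fixed
to zero; the factor correlations are left free.

    import numpy as np

    # Hypothetical assignment of indicators to latent variables.
    structure = {"F1": ["x1", "x2", "x3"], "F2": ["x4", "x5", "x6"]}
    indicators = [x for xs in structure.values() for x in xs]

    # Pattern matrix: rows are indicators, columns are factors.
    pattern = np.zeros((len(indicators), len(structure)), dtype=int)
    for j, xs in enumerate(structure.values()):
        for x in xs:
            pattern[indicators.index(x), j] = 1
    print(pattern)  # block-diagonal ones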

Step Three:  Test the SEM model with constraints on the relations between
the latent variables, e.g., some latents are not causes of other latents.
If this model gets acceptable fit, you can then go on to Step Four:

Step Four:  Impose constraints on the remaining free parameters and test
for fit. There are several ways of proceeding: (1) Testing that a free
parameter is zero is equivalent to fixing it to zero. So, testing that
free parameters are zero (and hoping to reject H0) can be done at this
point, perhaps with Bonferroni-adjusted levels of significance, since
these tests will not be independent. If you held implicitly that the free
parameters corresponded to nonzero values, this is the point at which you
could test that hypothesis by testing that they are zero.  But generally,
remember that freeing a parameter is not the same as specifying a
hypothesis about it.  (2) Test the hypothesis that freed parameters equal
some specified nonzero values (again using Bonferroni adjustments for the
significance level).  (3) Use a nested sequence of constraints on the
remaining free parameters to achieve independent tests (as long as you
keep accepting the null hypothesis).
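For concreteness, here is a minimal sketch of option (1), Wald z-tests of
the free parameters against a Bonferroni-adjusted level (the estimates and
standard errors are made up for illustration):

    from scipy.stats import norm

    def bonferroni_wald_tests(estimates, std_errors, alpha=0.05):
        """Wald z-test that each free parameter is zero, judged against
        a Bonferroni-adjusted level alpha/m for m non-independent
        tests."""
        m = len(estimates)
        results = []
        for est, se in zip(estimates, std_errors):
            z = est / se
            p = 2.0 * norm.sf(abs(z))
            results.append((z, p, p < alpha / m))
        return results

    # Hypothetical estimates and standard errors from a fitted SEM.
    for z, p, reject in bonferroni_wald_tests([0.42, 0.10], [0.08, 0.09]):
        print(f"z = {z:.2f}, p = {p:.4f}, reject H0: {reject}")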

Whatever decisions you make at any of these steps are provisional and
revisable. We are not achieving final TRUTH.  (Somehow this is all an
echo of R. A. Fisher).

>(snip)

>>>There remains a discussion-to-be-had over whether covariances are "facts."
>>>Would you care to take a side/slant/position on this?
>>Apres vous, Gaston.  You brought it up, so what is your point?
>
>        My point is that IF covariances are NOT facts for you, then you are
>lacking a  touchstone that can get you to truth. How could a model be shown
>true, if there were no solidity to the covariances as facts? I can permit
>that variability in the covariances due to, for example, an utter
>fabrication of part of the data (say a subject makes up an answer) is still
>data, even if it is not fact. If models (not the data) are "non-factual,"
>the error-parts of models (thetas) are non-factual. So it would be strange
>to hear that the non-factual stuff of the error-parts of models can
>magically turn non-factual covariance matrices into real true  "fact stuff"
>by an appropriate error-model  compensation. If the data is not "factual"
>without the help of some part of a fiction-containing-model, or without the
>aid of some conceptualization, then its status as "fact" is too weak to be
>a touchstone for truth.
>

Your point is still somewhat opaque here; I would need more clarification
of what you are getting at.  My concern with "data" in science is that
what we test our theories against cannot be accepted for that task uncritically.
Raw data are not "facts" automatically. We have to understand the conditions
under which they are obtained and their relevance to testing our model.
Covariances, being derived from "raw data", are already somewhat removed
from the original data. Further, since they represent second moments,
we already impose a structure onto the raw data in the process of obtaining
them, and we must justify how the result is appropriate as evidence against
which to compare our theoretical predictions. Variances and covariances can
be appropriate moments for evaluating statistical theories if we can assume
the data is multivariate normal, for these are the parameters of such a
distribution. In that case the covariances can be "facts" on which to
arrive at some judgment about a theory. Otherwise they might not be appropriate.
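One way to probe that normality assumption before treating the covariances
as the relevant "facts" is Mardia's multivariate kurtosis statistic. A
minimal sketch of my own (the simulated data at the end are for
illustration only):

    import numpy as np
    from scipy.stats import norm

    def mardia_kurtosis(X):
        """Mardia's multivariate kurtosis z-statistic; a large |z| casts
        doubt on multivariate normality, and hence on the covariances
        alone being the appropriate evidence for the model test."""
        n, p = X.shape
        Xc = X - X.mean(axis=0)
        s_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
        d2 = np.einsum("ij,jk,ik->i", Xc, s_inv, Xc)  # squared distances
        b2p = np.mean(d2 ** 2)
        z = (b2p - p * (p + 2)) / np.sqrt(8.0 * p * (p + 2) / n)
        return z, 2.0 * norm.sf(abs(z))

    # Illustration on simulated normal data (should not reject).
    X = np.random.default_rng(0).standard_normal((500, 4))
    print(mardia_kurtosis(X))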


>(snip) The "would you like to take a
>>>slant/side/position" above will push us towards meeting old friends if the
>>>only  way for you to speak of this is with a factor-analysis-first.
>(snip)
>
>>The best way I can extend this analogy to factor analysis is to consider
>>that one establishes via factor analysis a common factor that one does
>>not think, on the basis of its indicators, is a measure of, say,
>>general intelligence.  But when one joins them with other indicators of
>>general intelligence, one discovers that they all share a common factor
>>rather than two factors, and then on closer inspection of the initial
>>indicators, one discovers a reason why the initial indicators were
>>after all, measures of general intelligence.
>
>        I recognize this as a diminutive case of the four-step discussion
>where the researcher postulates K factors but  K-1 factors ALSO fit. The
>above sets  K=2.  The above context is too specific to be of much help, so I
>will make some comments more appropriate to a larger K in the context of the
>four-step. The problems connected to "truth" were two fold: (1) In the
>four-step there was a base model that was believed to be true according to
>the literature. If the expansion of this model led to the step-1 factor
>model and both this model and a model with one fewer factor fit, why should
>we discard the original model? If one factor fits, two probably will also
>fit, so 2 might still be the true model.

In the history of science, situations like this were usually resolved in
favor of the more parsimonious model--the model that estimates fewer
parameters.

> (2) Is a model with fewer factors
>necessarily a more parsimonious model? How does one count structural
>sparseness versus minimizing the number of variables/factors?  This issue is
>not obvious in your 2 factor to 1 factor example, but this is relevant in
>the larger context. "God surely wouldn't do parsimony THAT way..."
>is a truth claim.

Parsimony is not about the number of variables but the number of estimated
parameters relative to the number of data points. The fewer you estimate,
the more parsimonious is the model.

But I admit that for centuries there has been confusion about this.
Generally when you hypothesize fewer (latent) variables, you have fewer
parameters to estimate involving relations between the latents and between
the latents and the manifest variables. But you can get a highly parsimonious
model with many latent variables if you fix lots of parameters a priori in
your model. It is important to realize that the goal is testing a hypothesis, and
estimating parameters only arises because we have incomplete hypotheses
(not all parameters are prespecified by hypothesis) and unspecified
parameters are "nuisance parameters" that must be estimated so that values can
be available for reproducing the data from the model.  These estimated
values must not be the source of lack of fit, so they are estimated to
give the best fit conditional on the constraints of prespecified parameters.
But in models that potentially involve more parameters than data points,
some parameters must be prespecified to achieve identification (i.e. make
it possible to achieve unique estimates of the unspecified parameters),
while additional parameters must be prespecified to achieve overidentification,
and a possibility of lack of fit. When models are identified, the dimensionality
of the residual space in which the reproduced data are free to differ from
the observed variables is equal to the number of original data points minus
the number of estimated parameters. So parsimonious models yield more conditions
by which the model may be disconfirmed by lack of fit.
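That counting can be made explicit in a few lines. A sketch (the function
is illustrative, not from any SEM package):

    def model_df(p: int, n_free: int) -> int:
        """Degrees of freedom of a covariance-structure model: p observed
        variables yield p*(p+1)/2 distinct variances and covariances, and
        n_free parameters are estimated. df > 0 (over-identification) is
        what leaves room for the model to be disconfirmed by misfit."""
        return p * (p + 1) // 2 - n_free

    print(model_df(p=12, n_free=30))  # 48 conditions for potential misfit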

Which brings up the possibility of a STEP 0:  Is the number of latents
K-1?  Being more parsimonious (at least in terms of having fewer covariances
between latent variables to estimate), it has more constraints imposed. So, we
might be able to reject this model and comfortably proceed then with the four
steps.  If we accept this hypothesis, then we need to seriously reconsider
our initial theory on grounds of parsimony.  (Or maybe something else is
wrong).
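Such a Step 0 amounts to a likelihood-ratio difference test between nested
factor models. A minimal sketch, with made-up fit statistics:

    from scipy.stats import chi2

    def chisq_difference_test(chisq_r, df_r, chisq_f, df_f):
        """Difference test between a more restricted model (e.g. K-1
        factors) and a less restricted one (K factors). A small p-value
        rejects the restricted K-1 model."""
        d_chisq, d_df = chisq_r - chisq_f, df_r - df_f
        return d_chisq, d_df, chi2.sf(d_chisq, d_df)

    # Hypothetical chi-square values and dfs for the two models.
    print(chisq_difference_test(chisq_r=88.4, df_r=43, chisq_f=51.2, df_f=33))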

>        Now back to the above, taking a different slant. Note that your "one
>discovers a reason" is an open invitation for a constructionist to point
>to: convincingness, reasonableness, what we all know, and what everyone or
>most people/scientists would agree to, ----- in short to a socially
>maintained world of practicing scientists. (See Harold Garfinkel. "Studies
>of the routine grounds of everyday activities." Chapter 2 in Studies in
>Ethnomethodology, 1967; which originally appeared as an article in Social
>Problems in 1964). This appeal makes it difficult for truth to be
>determinable or touchable for THIS model. Yes you went on to say "then test
>it," but the next test is plagued with exactly the same form of challenge,
>so it will also be found to be not convincing in and of itself. So we point
>to yet a further test or investigation and ........ "truth" is always one
>step away from our current model. This makes a great story to tell the
>granting agencies when one wants "more money" for that next step.

You're preaching to the choir, Les. Besides being a fan of Kant, I'm a
fan of Wittgenstein, and Wittgenstein makes you acutely aware of the
rules you function with, and rules in this sense are normative, social
in nature.  I've also read works on the sociology of knowledge and science
(e.g. Bloor, Latour, etc.). I have already published the idea that
science is a social practice governed by rules (like language). But
implicit in Wittgenstein's analysis of following a rule is the idea that
there must be an objective way to determine if one is following a rule
properly--otherwise it is not following a rule.


>        Note the parallel here to experimentation. It was maintained for a
>time that an experiment (including psychological experiments) could be
>conclusive. There are now journals whose letters to reviewers inform reviewers
>that they as reviewers are NOT to claim, or believe claims, that the
>manuscript they are reviewing demonstrates anything conclusively (read true)
>with an experiment. Experiments are in some ways stronger than SEM, and in
>other ways weaker. Perhaps we need a similar warning to the AUTHORS of
>SEM papers, and to reviewers. Do not claim, or believe claims of, models
>as truths. Demonstrably (to the satisfaction of other scientists like us)
>compatible with specific consistency claims....OK. True?

I do not regard "truth" as appropriately used in the way here criticized.
Truth in science is not some conclusive determination.

>
>> And you can check this out
>>further by adding in more indicators of general intelligence and doing
>>a confirmatory analysis.
>
>        Just to balance the "shots on goal" (if you are into hockey
>metaphors) --- Or by adding one single cause of intelligence, with a single
>good indicator.


"cause of intelligence" or "effect of intelligence"? Clarify. Intelligence
may have multiple causes, none easily measured.  It is relatively easy to
construct effect indicators of intelligence, once you have the rule that
intelligence tests measure the degree to which an individual is capable of
inferring the culling rules of relations from instances of items or item
pairs in the relation.

But the issue of one additional indicator is whether you have pinned down
the existence and identity of a node in the causal graph.

>
>>[snip]
>
>        Yes, there are philosophies attached to this. I cited a Garfinkel
>piece above, before I had read this far into your posting Stan. I think my
>choice was appropriate..... Garfinkel's piece begins....  "For Kant the
>moral order "within" was an awesome mystery; for sociologists the moral
>order "without" is a technical mystery."

I'm not sure what Garfinkel is talking about with respect to Kant. Do you
know?

>        The appeal to judges reasoning as paralleling SEM reasoning (which
>followed in your posting) does not help. Permit me another Garfinkel quote,
>this time from (1967, page 3)
>         "In short, *recognizable* sense, or fact, or methodic character, or
>impersonality, or objectivity of accounts are not independent of the
>socially organized occasions of their use."

Of course. The important thing is to see how objectivity unfolds and
evolves within a social context. Merely establishing that objective
accounts have a social substrate does not invalidate them. I think
Gaston Bachelard, as elucidated by Mary Tiles in _Science and Objectivity_
(1984, Cambridge University Press), shows how this occurs. For Bachelard
"...objective knowledge, as the limit and goal of all scientific enquiry,
together with its co-ordinate conception 'reality', is a purely functional or
formal notion.  It has a constant role as the source of regulative
principles, of standards of evaluation, even though its positive
content is subject to revision. Progress in science is progress _towards_
objectivity" (Tiles, 1984, p. 44).  My own addition to this argument is
that objectivity is derived from a schema of perception. Object perception
involves simultaneously a determination of object and subject as distinct
from object. Object is that which is invariant across diverse appearances
of it, distinct from the effects of the observer's actions and states
producing changes in the perceptual field. (See Gibson). Object perception
involves reflection, awareness of one's own actions and the changes they
regularly produce in the perceptual field, so these may be factored out
of the perception of the object as invariant across the diverse perspectives
produced by the motions and acts of the observer. With perception much of
this occurs automatically below conscious awareness. So when we apply this
schema of subject-object consciously to the development of objective concepts
of the world, we are seeking to synthesize or integrate, by the object concept,
diverse images reconstructed from memory.  Our conception of the world
is analogous to our perception of it. This is the metaphoric aspect of the
objectivity/subjectivity schema.

Like Gibson on the active, moving, manipulating perceiver that
continuously perceives the effects of its own acts in the perceptual
field, Bachelard regards the seeker of objective knowledge as an active,
self-reflective individual who intervenes in the world in the context of
a discursive dialogue within a community in which justified knowledge is
sought. According to Tiles, "Objective knowledge may thus be said to be
non-subjective in two senses: (1) it is such that the object of knowledge
(what is known, whether this be a particular thing, phenomenon or fact) is
distinct from the knowing subject at least to the extent that there is
room for there to be a cognitive gap between them; (2) it is independent
of the individual, noncognitive constitution of the subject, so that it
is possible knowledge of a depersonalized rational subject. Thus any
individual aiming to acquire such knowledge must both become aware of and
seek to eliminate possible sources of error, the ways in which or the
reasons why he might make mistakes in his cognitive judgements. At least
in his scientific life, he must aspire to conform as closely as possible
to the theoretical ideal of a purely rational subject.  It is by
combination of (1) and (2) therefore that we get the idea that objective
knowledge is the product of critical rationality." (Tiles, 1984, p. 49-50).
The "depersonalized rational subject" is a generic scientist, implying
that anyone who puts him/herself in the same place with the same reasoning
should be able to arrive at the same conclusions from the evidence.

Bachelard rejected grounding objective knowledge in perceptual passivity
(e.g. empiricists' sense impressions). The senses are a source of subjectivity;
instruments yield more objective information to work from.  But in the
historical development of a scientific field, "...there is a continuous
interplay between intuitive, experiential (subjective) and rationally
discursive (objective) forms of knowledge. Pure objectivity is neither
given nor ever fully achieved, but has to be worked for and worked towards.
The dialogue between experience and theory is accompanied by a dialogue
between subjective and objective modes of thought" (Tiles, 1984, p. 57).
The meaning of subjectivity and objectivity depends on the specific context
in which these concepts are applied, and they will change with reflective
thought that questions the status quo and the completeness of the account.

Reflective, self-critical thought will seek to incorporate the sociological
knowledge of the scientist's social and cultural milieu into the discursive
account both of what is subjective and distorting and of what is objective.
But it will also regard the account as open-ended, ever open to
revision.

[snip]

>>Parenthetical note:  covariances might or might not be facts for
>>deciding the objective validity of some latent variable and its relationship
>>to variables.  For example, one may have to take into account other
>>moments of the distribution as well.  If relationships are nonlinear,
>>linear factor analysis may not apply.


>
>        I see the "as well" in this statement as contradicting the "might
>not be" in the first line. As well, says it remains....in addition to. I
>think you intended to say might not be sufficient.....

That's my intent.

>..... but this leads back
>to identification as sufficient for objective validity.....which also seems
>problematic. .....

Mere "just-identification" is NOT sufficient for objectivity, for nothing
can be tested, and testing is essential to establishing something objective.
We need over-identification to achieve objective knowledge.

>>
>>[snip]

>>
>>Here Les seems to be using a different sense of "identification" from
>>the one statisticians have traditionally used with respect to estimated
>>parameters.  This is more the "identification" like "Let's see your
>>identification!"  "Who is this?"
>
>        No. I am intending a statistician's use of the term here. In an
>ordinary first order factor model, the factor variances and covariances are
>statistically identifiable coefficients that seem in some way to be directly
>attached to the factors as variables. In a second order factor-analysis,
>using these same first-order factors, the factors as variables do not have
>any estimable coefficients that stand directly as being  "the factors." Here
>all the estimable (statistically identifiable) things are only recomposed
>into the factors/variables via the action of the model equation for each of
>the factors. That is, the equation (a model component) is required to
>recombine the estimated/identified things to get to "the first order factor".
>

This is not clear, Les.  Why are structural coefficients between second-order
factors and first-order factors (say there are four first-order factors and
a hypothesized single second-order factor) not estimable?  They are estimable
if they are identified parameters.

[snip]

>>  Is his use of "fiction" here to indicate that we
>>are making up a story for the manifest indicators and their relationships
>>to hypothesized latent variables?
>>
>       Yes and No. For me the second order factor is a fiction whether it
>has no, one, or twenty indicators, and whether the coefficients connecting
>it to the rest of the model are under-, just- or over-identified. The yes
>parts come from "a story". Yes, an equation is a story, and each endogenous
>concept, and indicator has an equation/story.
>
>WHERE I AM COMING FROM...
>        I suspect any netters following Stan and me will be inclined to
>say.... choose your philosophy and choose your side of the debate. I would
>like to warn the so-inclined that my side of the philosophy was not selected
>by tossing a coin, or finding a philosophy that was consistent with how I
>wanted to think about SEM. My side in the philosophy comes from my reading of
>social psychology.

And mine from reading a lot of philosophy and philosophy of science,
which may be seen as a critique of many of the presumptions of social
psychology, especially those in 1964....

>        Let us begin with perception --- Stan does refer to this, and this
>is foundational. I looked at perception (for non-SEM reasons) and found much
>structured knowledge available from psychology. The physiology of perception
>informs us richly about much of perception and indeed MIS-perception--which
>is even more important from the perspective of philosophy.

And very important for a reflective reasoner who seeks sources of error
and subjectivity in arriving at objective knowledge.


[snip]



>        So semnetters, please be careful in your choice of philosophy. Do
>not flip a coin. Kant did not have anywhere near the understanding of
>perception that is currently available.

Kant's theory of perception is not relevant here.  J. J. Gibson's is.
But what in Kant inspires people to call themselves contemporary Kantians
(like Hilary Putnam, George Lakoff, Mark Johnson, and even Gaston
Bachelard) is Kant's emphasis on how the knower contributes
categories, schemas, structuring to knowledge as that aspect of knowledge
not contributed by sensory information. Concepts like object-attribute,
causality, and community of reciprocally interacting objects were
regarded by Kant as fundamental relations by which humans structure and
organize their experience and not given in the sensory information itself.
Whereas Kant regarded his categories and schemas as fixed and complete,
contemporary Kantians can regard schemas and categories as partially modifiable
with experience and psychological development, while retaining some of their
original  characteristics. Other categories and schemas have also been
recognized as structuring experience. Modern social and cognitive psychology's
use of the schema concept is merely an unfolding of a Kantian idea, for example.
Chomskey's "deep grammar" is another, very complex one. The neurologists
"feature detectors" another. Lakoff and Johnson's "embodied schemas of
perception" become the basis for metaphors like PATH, CAUSE, UP-DOWN,
CONTAINER (IN-OUT), which are used to structure knowledge.

> And that new understanding makes us
>able to see things in ways quite different from what Kant saw. We CAN now
>see things in contexts and with connections that Kant couldn't.

You are missing the point here, I suspect, because you have not studied
Kant; if you had, you would recognize how pervasive some of his ideas are
in contemporary (if not your own) thought.

Stan Mulaik
