This post made me wonder: is seen س the standard way to represent an unknown variable in Arabic? And in mathematics? It's funny, because I know that the X used in Western languages to represent an unknown variable -- first in mathematics and later elsewhere -- has its origins in Arabic! I was told that Arabic mathematicians used to put شيء 'something' as a variable. When Arabic knowledge was exported to Iberia, the sheen ش was replaced by <X>, which was the grapheme for /š/ in the area in those days (still to be found sometimes in Portuguese and Catalan, if I'm not mistaken). Had someone asked me to guess how a variable would be represented in Arabic, I would've guessed a sheen. But now I see elroy use a seen for it, and I wonder whether that is perhaps a more common way? Or maybe there is simply no common way? Thank you in advance!

Obviously, you know what I'm going to say. I've always seen س and ص used for x and y, respectively, and ع for a third variable if needed. Disclaimer: I was never schooled in Arabic, so I could be wrong about what's most commonly used. My impressions are based on what I have seen in Arabic textbooks used by my peers.

Hello! Indeed, we use seen س as a variable (in maths, physics and chemistry). X Y Z are rendered as س ع ص (when you're doing chemistry or maths, for instance). It is, at least, the way I was taught. When we want to refer to someone (as in "X is speaking"), in Algeria we use فلان flaan (from fulaan).

Interestingly, fulan comes from Greek, so I think the two cultures have influenced one another a lot. It is still not clear why the Arabs used shayʾ for X, although its meaning suggests an unknown quantity. But that's etymology, and it remains speculation after all: you can never be sure about the time and place of a borrowing. Jamshid

The standard variable for X is indeed س. It may have come from شيء (first time I've seen this claim, but it looks very plausible, because it means "thing": five "things" plus 2 equals ...). But why س and not ش? Well, this is to avoid the ambiguities that could result from using dots, especially in handwriting. For example, when mathematics is taught in Arabic, the variables 'a', 'b', and 'c' are represented by أ (but with a special loop going through the top of it), ب (but without the dot!), and جـ (again, without the dot). This should give us a clue as to why س is used instead of ش, even though it might have come from the word شيء.

What about Indian languages or Sanskrit? Like sifr, mathematical notions often come from India. Is there a link? Maybe somebody who is familiar with Indian languages or Sanskrit can contribute.

Thank you all for your answers! Very interesting! The person who taught me that the X came from شيء is a professor in Arabistics (Arabology? -- how would you say that?!) who, I'm sure, wouldn't have told it if she wasn't certain about it. Having read Wadi Hanifa's explanation, I'm pretty convinced the seen used now originates from a sheen. I suppose the Y and Z (like أ ب ج) simply follow the alifba'ical order, skipping ط because it's too close to ص?

When you need to write, say, (xy)^2=10, do you actually merge س and ص or are they written separately?

1. أ ب جـ are used for a, b, c. As stated above, the common sequence that corresponds to x y z is س ص ع.
2. The third letter of the Arabic alphabet is ت, not ط.
3. جـ is used because in certain contexts (like this one) it is common to use another alphabetical order: أ ب ج د هـ و ز ح ط ي ك ل م ن س ع ف ص ق ر ش ت ث خ ذ ض ظ غ

They are written separately, of course.
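As a quick way to see where the mathematical letters fall in that alternative order, here is a minimal sketch (plain Python; the letter sequence is copied verbatim from the list above, and the 1-based positions are just counted from it):

```python
# The alternative (abjad) alphabetical order quoted above.
abjad = "أ ب ج د هـ و ز ح ط ي ك ل م ن س ع ف ص ق ر ش ت ث خ ذ ض ظ غ".split()

# 1-based positions of the three letters used for x, y, z.
for letter in ["س", "ع", "ص"]:
    print(letter, abjad.index(letter) + 1)
```

This shows that س, ع and ص sit at positions 15, 16 and 18 of that order, i.e. they are consecutive except for the skipped ف -- which is exactly the point raised in the discussion below about why ف was passed over.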

I have some doubts about that scenario. In the early days of European algebra, mathematical notations varied wildly; the Europeans themselves only settled on 'x, y, z...' as the default representations for unknowns after the Renaissance. The following excerpt is about algebra in the Renaissance. (I hope I'm not breaking any rule of the forum by quoting it.)

Early in the fifteenth century [...] some of the abacists began to substitute abbreviations for unknowns. For example, in place of the standard words cosa (thing), censo (square), cubo (cube), and radice (root), some authors used the abbreviations c, ce, cu, and R. [...] This change was a slow one. New symbols gradually came into use in the fifteenth and sixteenth centuries, but modern algebraic symbolism was not fully formed until the mid-seventeenth century.

Katz, Victor J., A History of Mathematics -- An Introduction, 2nd edition, 1998, Addison Wesley; chapter 9, section 9.1.1, page 345. See also section 9.2.2.

I knew about point 2, but I was disregarding ت and ث because, dotless, they are identical to ب; then the third one you could use is جـ again anyway. I didn't know about point 3 -- thank you for telling me, because that explains things better. But let me rephrase what I wanted to find out.

In Western notation, the choice of Y and Z to represent a second and third variable simply follows the order of the alphabet from X onwards -- at least, I have no reason to suspect otherwise; maybe their use is motivated in another way. In Arabic, then, this would be the relevant part of the alphabet -- at least as I know it: س ش ص ض ط ظ ع. Disregarding the letters that are only distinctive due to their dots, I would still expect ط, and not ع, to function as the third variable. How come that's not the case? The alternative alphabetical order doesn't seem to clarify this for me, as it would be inconsistent with the order used in mathematics (ع before ص), and there's a ف in between that could also have been used. So, basically: what is the motivation for the choice of ص and ع in Arabic as equivalents of Y and Z in Western notation? (If that can be inferred at all, of course.)

I'm afraid the only source I have for the etymology I gave is the person who told it to me, which happened quite some time ago and in a very by-the-way context (she was teaching the spelling of hamza, for which shayʾ served as an example) -- suffice it to say it may both have been told not very accurately at the time and remembered not very accurately now. Still, the fact that the notations only got fixed in the Renaissance doesn't mean European mathematicians couldn't have based them on the (Arabic) writings dating way back. I don't know much about (historical) Iberian orthography, but even if <x> no longer represented /š/ in the Renaissance, they could still have based themselves on earlier translations of the Arabic algebra.
Admittedly, we would then have to suppose that those translations didn't simply translate shayʾ (to cosa), but left it as a transliteration, which is not that obvious. Maybe someone else has access to a good source that discusses the origin of X in Western mathematics.

After the Renaissance, Johannes; modern European mathematics was still an infant in the Renaissance. It did, and they could. However, I think it's an oversimplification to think of algebra as something that was smuggled from the Islamic world to the Christian world across the Pyrenees. There were all sorts of cultural exchanges between the two civilizations, in many different places. Remember that most of the best-known Renaissance mathematicians were Italians; indeed, because of the empires that the Venetians and the Genoese had founded in the Eastern Mediterranean, they had many contacts with the Islamic world. I found a link to the following page at Wikipedia: "Prior to Descartes' influential work, many authors would use vowels for unknowns and consonants for constants." Also on the same page is this:

It's also interesting that "fulann" would be the unknown somebody (so-and-so, what's-his-name). In Spanish and Portuguese, the word for this is "fulano". Are these terms related, or is it just a coincidence? Seeing that much of the Iberian peninsula was controlled by Arabic speakers for many centuries, I wouldn't be surprised. I recently learned that the Portuguese term for the "@" (arroba) comes from the Arabic word (al-rubʿ) for a specific unit of measurement, which was also represented by the at-sign in Brazil and Portugal prior to its common use in e-mail.

Oops, I misread. But thank you for showing I'm not the only one who does that sometimes. Not algebra; some letter. That's not that far-fetched, is it? Although I do agree that it is not at all clear how that x would have spread then. On this page, that same Wikipedia says something else: ... not really surprising if we take the nature of this website into consideration, although it is still useful from time to time. Yes, and on that very same page we also find the origin according to the competing dictionary. Although, obviously: "Cajori says there is no evidence for this." This is what I found on Etymonline.com: There doesn't seem to be much certainty about the etymology of our x. But the quote you gave about Descartes did convince me that his use of x is a more probable origin, if only because of the timing (to which you pointed). I assume the use could have spread quickly from that point on. Also, while sources seem to differ, the xei origin is explicitly rejected in some of them, which is never the case with the Descartes one; that may be an indication. We can still wonder, then, why x, and not z, is used as the first unknown nowadays, although I suppose the alphabet could suffice as an explanation, however dull that may be.