I have tried this but it returns true for both UTF-8 and ASCII:
console.log(/[^w/]+/.test("abc123")) //true
console.log(/[^w/]+/.test("ابت")) //true
I think you meant /[^\w]+/ (in /[^w/]+/ the backslash is missing, so the class matches any character that isn't a literal w or /, which is why both strings test true), but what you really want, from what I can gather, is:
console.log(/^[\x00-\x7F]+$/.test("abc123")) //true
console.log(/^[\x00-\x7F]+$/.test("abc_-8+")) //true
console.log(/^[\x00-\x7F]+$/.test("ابت")) //false
If you didn't actually mean to check the full ASCII set, you can just use:
console.log(/^[\w]+$/.test("abc123")) //true
console.log(/^[\w]+$/.test("abc_-8+")) //false
console.log(/^[\w]+$/.test("ابت")) //false
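If you need this check in more than one place, you could wrap it in a small helper (isAscii is just an illustrative name; the * quantifier instead of + is a choice that treats the empty string as ASCII):
function isAscii(str) {
  return /^[\x00-\x7F]*$/.test(str)
}
console.log(isAscii("abc123")) // true
console.log(isAscii("ابت")) // false
console.log(isAscii("")) // true, because * also allows the empty string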
About \x notation
\xFF is hexadecimal character notation, used in this example as the range \x00-\x7F to match the full ASCII character set. \x00-\x7F is functionally identical to a-z in that both specify a range of characters; hex notation is used here because it can express every ASCII code point reliably, including ones that have no literal spelling.
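For instance, a-z could equally be written in hex (a demonstrative comparison, not from the original answer):
console.log(/^[\x61-\x7A]+$/.test("abc")) // true, \x61-\x7A is the hex spelling of a-z
console.log(/^[a-z]+$/.test("abc")) // true, equivalent range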
\w matches 'word' characters, which is the same as [A-Za-z0-9_] (note that it includes the uppercase range).
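To illustrate that equivalence (test values are just for demonstration):
console.log(/^\w+$/.test("Hello_123")) // true
console.log(/^[A-Za-z0-9_]+$/.test("Hello_123")) // true, the same class spelled out
console.log(/^\w+$/.test("abc-123")) // false, '-' is not a word character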