Cyber-racism Symposium Report
This report summarises the key issues discussed by panellists and observers at the Cyber-racism Symposium. The opinions expressed are those of the participants and do not necessarily represent the position of the Human Rights and Equal Opportunity Commission.
The Symposium participants considered the effectiveness of existing regulation of racial vilification and proposed various suggestions for improvement. The participants also discussed the non-regulatory options available to address cyber-racism. 
This problem occurs amid a marked increase in attacks against people of Arabic-speaking background in Australia since 11 September 2001:
Fear and concern will cause people to act on negative feelings and there is frustration felt by many. The Arabic community feels there is little that can be done.
Research by the Human Rights and Equal Opportunity Commission demonstrates that racist material can be found in websites, computer games, emails, chat-rooms, discussion groups and music. Background information prepared for the Symposium provides examples of racist Internet material, including material that has been created by people within Australia.
The Human Rights and Equal Opportunity Commission administers the Racial Discrimination Act 1975 which makes racial hatred unlawful. A person from the group against which the offensive racist act or material is directed can make a complaint to the Commission. But it is not possible to apply the legislation to ISPs or individuals that are located in other countries.
International cooperation should be taken into consideration.
Many European countries, individually and/or through the Council of Europe, have made incitement to racial hatred and dissemination of racist materials criminal offences, including when they occur on the Internet.
Australia can rely on these standards in dealing with sites that are created or hosted in some European countries. It may be effective to notify the authorities of the country where racist material originates or is hosted. That country may be able to prosecute the case, possibly in cooperation with other member states of the Council of Europe who have criminalised racial hatred.
There also needs to be more interagency cooperation within Australia. The Australian Broadcasting Authority has international networks that could assist in dealing with cyber-racism if the material originates in some overseas countries. The police also have international networks, though the material would have to be of a very serious nature, perhaps relating to security issues.
Internet content regulation grew out of a classification scheme that has gradually expanded beyond film and video and has now come to be applied to the Internet. The current classification code deals with sex, violence and instructions to commit crime; it does not deal with racism.
The Australian Broadcasting Authority (ABA) cannot investigate complaints about racist Internet content even though the ABA is the key Internet content regulator in Australia.
The ABA can refer Internet material to the Classification Board of the Office of Film and Literature Classification (OFLC). The guidelines used by the OFLC to classify Internet material are the same as those used by the ABA. The guidelines were originally designed to regulate 'entertainment'.
The Internet contains more than 'entertainment' and the OFLC Board seeks to reflect community standards in its classification of content. Anti-vilification laws are a community standard. It seems desirable to have consistency in regulatory standards so that the ABA and OFLC can assess and deal with racist content in a way that is consistent with Australian law.
The Internet content regulatory scheme gives the Australian Broadcasting Authority the power to issue 'take-down' notices to ISPs. The ABA also has links with international voluntary hotlines and other networks. HREOC does not have these powers or networks to deal with racist content. Could the classification guidelines be changed so that the OFLC and the ABA can deal with racist content within the existing Internet content framework?
Should there be a pool of skilled people to identify and evaluate racist content? There needs to be a body that advises the Australian Broadcasting Authority if material is contrary to anti-vilification legislation; an assessing body that isn't there to jail the perpetrator, but to make decisions on content. The OFLC plays this role in dealing with sex and violence, so could another body have these sorts of powers for racism? Could HREOC's powers be changed so it could play this 'assessing' role for Internet content?
The Australian Broadcasting Authority would need to rely on a specialist tribunal. ISPs would also want some confidence that there was a regulatory body that could provide advice and judgement.
Dissemination of racist materials on the Internet has been criminalised by the Council of Europe under the First Additional Protocol to the Cybercrime Convention. This approach makes available criminal enforcement mechanisms, including international co-operation on the basis of uniform criminal standards across various countries.
It would be much easier for Australia to use the international enforcement framework in Europe if Australian standards on racial vilification were consistent with those in Europe. This would also send a more uniform message about the unacceptability of racist content.
State criminal law in Australia that prohibits serious racial vilification does not seem to be effective, as there have been no prosecutions under that legislation to date. Some legislation has only recently been enacted, such as Victoria's Racial and Religious Tolerance Act, which specifically covers electronic communications.
Advantages of introducing federal criminal sanctions against racial vilification in Australia:
Disadvantages of introducing federal criminal sanctions against racial vilification:
Alternatively, the civil regime could include stronger penalties. There are regimes where bureaucracies have been established to 'police' compliance through civil penalties, such as the Office of the Employment Advocate.
The problem of prosecuting people outside Australia, and particularly in the United States, would also remain.
The Racial Discrimination Act places the onus on the victims of racism to combat the problem. A person lodging a complaint must be from the targeted group. Other Australians who may find the material offensive, but who are not from the racial group that is vilified, cannot act.
The reality is that most victims of racism do not have the resources to pursue cases through HREOC, and then through the courts, as happened in the Toben case. That case was possible because the complainant was supported by the community he represented, and had the unpaid assistance of a solicitor and barrister. Most victims of vilification suffer disadvantage and would not be able to find the resources to do this.
In legal prosecutions it can be difficult to trace the originator of material (including emails) or the owner of a site, even when they are located within Australia. How is action to be taken against anonymous sites or emails? How and by whom is the proper respondent to be located?
The Internet Industry Codes of Practice provide some scope to deal with racist content on the Internet as Australian ISPs can respond to the directions of a 'relevant authority' to remove Internet content. HREOC cannot make an assessment of the content of a site in the way that the Australian Broadcasting Authority can. HREOC can only investigate and attempt to conciliate complaints but has no enforcement powers. HREOC could not order a site to be taken down. The courts are a 'relevant authority' and could make an order for an ISP to remove offensive content.
ISPs may be considered a 'publisher' of the material and therefore liable for it. ISPs have a responsibility to make sure racist content is dealt with and to send a clear message to their customers that it is unacceptable. This is the expectation in Europe.
Industry can assist with investigations. There is a difficulty with pre-paid Internet accounts as there is no physical address. However, customer and caller details can be provided to law enforcement agencies to assist in identifying people involved in criminal activity on the Internet. There are initiatives towards caller-line identification (CLI) to assist police and other investigative bodies. This may permit better identification of the authors of vilificatory material.
ISPs are required by the Codes of Practice to provide customers with information about adhering to Australian law. Providers also have obligations to advise customers on how to limit access to content that they may find unsuitable. There are online safety tools such as filters that can block racially offensive material, and ISPs must provide advice and at-cost filtering products.
Email is currently not regulated by uniform legislation.
Racially vilificatory material can be distributed by unsolicited bulk email, or 'spam'. Spam accounts for about one quarter of all emails sent globally.
Internet chat rooms may contain racist content and this medium is very difficult to monitor. Would the Racial Discrimination Act apply to chat rooms or are they 'private' communications if they are password protected? The level of password protection is often very shallow and in such cases the Act should apply.
Internet service providers can do some monitoring, for example, by scanning room names. It would be very resource intensive and probably not possible to routinely identify racist words inside chat rooms or bulletin boards.
Monitoring can also be done by the public by bringing racist content to the attention of the ISP.
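The kind of room-name scanning described above can be illustrated with a short sketch. This is a hypothetical example only, not any ISP's actual system; the room names and the keyword list are placeholders that a moderation team would maintain.

```python
# Hypothetical sketch: flagging chat-room names that contain terms from a
# moderator-maintained keyword list. The keywords below are placeholders.
FLAGGED_KEYWORDS = {"flaggedterm", "anotherterm"}

def flag_room_names(room_names, keywords=FLAGGED_KEYWORDS):
    """Return the room names containing any flagged keyword (case-insensitive)."""
    flagged = []
    for name in room_names:
        lowered = name.lower()
        if any(keyword.lower() in lowered for keyword in keywords):
            flagged.append(name)
    return flagged
```

Scanning only the room names, as the sketch does, is cheap; as the report notes, applying the same matching to every message inside a room or bulletin board would be far more resource intensive.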
The non-regulatory responses to cyber-racism would seem to fall into a number of categories, including technical responses, end user education, increased agency cooperation and community action.
As with the improvement of regulatory systems, the aim of non-regulatory approaches needs to be determined: is it to protect individuals or families, protect society, stop sites, stop racism?
Can cyber-racism be eliminated by filters? Under the Broadcasting Services Act, ISPs must provide a filter either free of charge or at cost, as part of a 'family friendly' policy. Consumers need to be aware of what filters provide and make their own evaluation: filters are not 100% effective, perhaps only about 70-80%.
There are problems with filters, as they can block out 'good' sites which promote anti-racism as well as blocking racist sites. Text-based information can also be hard to recognise, as people can, for example, substitute letters to evade keyword matching. Smarter filtering could and should play a bigger part.
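The letter-substitution evasion mentioned above can be partly countered by normalising text before matching. The sketch below is illustrative, not a description of any real filtering product; the substitution table and blocklist terms are assumptions.

```python
# Hedged sketch: normalise common character substitutions (e.g. '4' for 'a')
# before matching text against a blocklist. The table is illustrative only.
SUBSTITUTIONS = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s",
})

def normalise(text):
    """Lower-case the text and map common substituted characters back to letters."""
    return text.lower().translate(SUBSTITUTIONS)

def matches_blocklist(text, blocklist):
    """Return True if the normalised text contains any blocklisted term."""
    norm = normalise(text)
    return any(term in norm for term in blocklist)
```

Even with normalisation, simple string matching still produces the false positives the report describes, since an anti-racism site may quote the very terms a filter blocks.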
Organisations or individuals can give themselves a presence on the Internet as anti-racist advocates and educators.
There is a need for education in critical thinking about all media, including the Internet, mainly in years 11 and 12 of high school. 'NetAlert' is an independent advisory body set up by the Commonwealth Government. It has run a nationwide program through schools and community organisations and advertises on television. It has a web site and provides information packages to the public and to ISPs. HREOC and NetAlert could work more closely to examine the opportunities for providing more anti-racism education.
The Australian Broadcasting Authority is looking at models for training Internet users in critical thinking. There are models in Europe, for example between the French and Belgian education departments. This education addresses a series of issues such as 'stranger danger' in chat rooms and how to assess the quality of information on a web site. These sorts of models could be used to educate about racism.
There could be a content rating scheme. The Platform for Internet Content Selection (PICS) is a mechanism that could be used to classify sites. Users can be alerted to, or prevented from accessing, sites which violate their preferences.
The system of community 'black lists' could help. Individuals might wish to nominate the content that they consider racist and add sites to these lists. Individual computers can also be configured to screen out content that the user does not wish to see. It is a preference system, not a censorship system.
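A per-user preference system of the kind described above can be sketched briefly. This is a minimal illustration, assuming a community-shared blocklist file with one domain per line; the file format and function names are hypothetical.

```python
# Illustrative sketch of a local preference (not censorship) system: each
# user keeps their own blocklist of sites, and a client-side check consults it.
from urllib.parse import urlparse

def load_blocklist(path):
    """Read a blocklist file (assumed format: one domain per line)."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def is_blocked(url, blocklist):
    """Return True if the URL's host matches a blocklisted domain or subdomain."""
    host = urlparse(url).netloc.lower()
    return any(host == domain or host.endswith("." + domain)
               for domain in blocklist)
```

Because the blocklist lives on the individual computer and reflects that user's nominations, the scheme screens out only what the user chooses not to see, which is the distinction the report draws between preference and censorship.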