AI gives us powerful capabilities to learn and work faster. We’ve all experienced, or heard about, hundreds of success stories where AI tools not only sped up business processes or individual work output but even solved previously “stuck” problems. Like everything else in life, AI is great when it works as expected. But what happens when it doesn’t?
Let’s start with the basic question: Can AI be wrong? The short answer is yes, it can. Below is an example of a relatively simple question about dropping columns in AWS Redshift.
I asked both ChatGPT and Bard: "How can I grant permission to drop columns from a table?"
🤖 Bard replied:
To grant permissions to drop columns from a table in Redshift, you can use the following syntax:
```sql
GRANT DROP COLUMN ON TABLE table_name TO user_or_group;
```
For example, to grant the user "alice" the ability to drop columns from the table "customers", you would use the following command:
```sql
GRANT DROP COLUMN ON TABLE customers TO alice;
```
You can also grant the ability to drop columns from a table to a group of users by using the keyword "PUBLIC" instead of a specific user name. For example, the following command would grant all users the ability to drop columns from the table "customers":
```sql
GRANT DROP COLUMN ON TABLE customers TO PUBLIC;
```
Once you have granted the ability to drop columns from a table, the user or group will be able to use the DROP COLUMN statement to remove columns from the table. For example, the following command would remove the column "age" from the table "customers":
```sql
DROP COLUMN age FROM customers;
```
It is important to note that the user or group must also have the SELECT privilege on the table in order to use the DROP COLUMN statement. If the user or group does not have the SELECT privilege, they will not be able to see the data in the table and therefore will not be able to drop columns from it.
Here are some additional things to keep in mind when granting permissions to drop columns from a table in Redshift:
- The user or group must have the DROP privilege on the schema in which the table resides.
- The user or group must have the USAGE privilege on the table.
- The user or group must not have the INHERIT privilege on the table.
If you are not sure whether or not a user or group has the necessary permissions, you can use the SHOW GRANTS command to view their permissions.
🤖 ChatGPT replied:
To grant permissions to drop columns from a table in Amazon Redshift, you need to use the SQL GRANT statement to provide the necessary privileges to the user or role. However, it's important to note that Redshift does not have a direct command or privilege specifically for dropping columns. Instead, the DROP COLUMN operation is considered part of the ALTER TABLE statement, and the permission required to drop columns is the ALTER privilege on the table.
Here's an example of how you can grant the necessary permissions to a user or role in Redshift:
- Connect to your Redshift cluster using a client tool or command-line interface.
- Ensure you have the necessary privileges to grant permissions. You should be connected as a superuser or a user with the necessary administrative rights.
- Use the GRANT statement to assign the ALTER privilege on the specific table to the user or role:
```sql
GRANT ALTER ON TABLE table_name TO user_or_role;
```
Replace `table_name` with the name of the table to which you want to grant the permission.
Replace `user_or_role` with the name of the user or role to which you want to grant the permission.
For example, if you want to grant the ALTER privilege on a table called "my_table" to a user named "my_user," the command would be:
```sql
GRANT ALTER ON TABLE my_table TO my_user;
```
Keep in mind that granting the ALTER privilege allows the user or role to perform various table alterations, including dropping columns, so make sure to grant this privilege only to trusted users or roles.
Please note that the syntax and exact privileges available in Redshift may vary depending on the specific version and configuration of your cluster. It's always a good practice to consult the official Redshift documentation for the version you are using for detailed information on managing permissions.
(Now back to the human author.)
WOW! That’s amazing. It took less than a second for both AI tools to give me options on how to do it. But wait, why are they so different? Perhaps the Redshift data warehouse supports multiple ways to achieve the same thing. We’ve seen that before in plenty of technologies (for example, `DELETE FROM table;` and `TRUNCATE table;` both delete all the rows from a table). Let’s try it out.
I tried to grant permissions for another user to drop a column from one of my tables, as Bard suggested, and got: `ERROR: syntax error at or near "COLUMN"`
I tried to grant ALTER permissions on my table to another user, as ChatGPT suggested, and got: `ERROR: Invalid privilege type ALTER for table`
Not only were both answers wrong; both suggested queries that are simply invalid in Redshift. These are the privileges you can actually grant on tables in Redshift: `SELECT | INSERT | UPDATE | DELETE | DROP | REFERENCES`
As you can see, there’s no option to grant ALTER or DROP COLUMN permissions.
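For reference, here is what a syntactically valid grant looks like in Redshift, reusing the hypothetical table and user from the answers above (`customers` and `alice`). This is a sketch based on the privilege list above, not a recommendation for any particular cluster:

```sql
-- Grant privileges Redshift actually supports on tables:
GRANT SELECT, DROP ON customers TO alice;
```

Note that `DROP` here lets the user drop the whole table, not individual columns; there is no column-level drop privilege.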
On top of that, even the DROP query that Bard suggested is invalid.
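The statement Bard invented does have a valid counterpart: in Redshift (as in PostgreSQL), dropping a column is a clause of `ALTER TABLE`, which only the table’s owner or a superuser can run. A sketch, again using the hypothetical `customers` table and user `alice`:

```sql
-- Dropping a column is an ALTER TABLE clause, not a standalone statement:
ALTER TABLE customers DROP COLUMN age;

-- Since there is no grantable "drop column" privilege, one workaround is to
-- transfer ownership of the table to the user who needs to drop the column:
ALTER TABLE customers OWNER TO alice;
```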
As you can see, not only can AI be wrong, it can even invent syntax that does not exist. In the example above there’s no real harm in that, because the query simply fails. The real problem starts when the answers we get from AI look plausible and/or are valid but are still wrong: querying the wrong table or column, for example.
We all make mistakes. We’re human, so it’s understandable that we will; that’s why we double-check everything we do. But when it comes to computers, there’s always the expectation that what you get back is correct. From a simple calculation on a calculator to an app that can identify a song, if something is wrong we call it a bug and expect it to be fixed in the next version. AI assistants, though, are treated slightly differently. In many cases they’re considered “part of the team”, and often the smartest member of the team. That can lead people to believe and trust everything the AI gives them.
Am I telling you not to use AI at all? No. It’s a very powerful and impactful technology. But you should always double-check that the results are correct. Sadly, if the results are incorrect or misleading, you’ll have to go the old-fashioned route and actually do the manual work yourself.
In other words, I suggest using AI as an extra tool in your tool belt and not as a black box replacement for everything you do.
Two bonuses:
- We asked AI to solve our Data Solutions Engineering test. It must have embellished its resume, because unfortunately we wouldn’t have hired it onto our team.
- Here's an unaltered blog ChatGPT wrote for us about AI adoption and why we shouldn’t rush into it:
🤖 The Road Less Traveled: Why We Shouldn't Rush into Widespread AI Adoption
Introduction:
Artificial Intelligence (AI) has emerged as one of the most transformative technologies of our time, promising revolutionary advancements across various industries. With its ability to analyze vast amounts of data, make predictions, and automate tasks, AI holds immense potential for improving efficiency and enhancing our lives. However, before we fully embrace the AI revolution, it is crucial to exercise caution and consider the potential risks and ethical implications associated with its widespread adoption.
Lack of Understanding and Transparency:
One of the primary concerns surrounding AI is the lack of transparency and explainability. As AI algorithms become increasingly complex, they often operate as black boxes, making it difficult to comprehend how they arrive at their conclusions. This lack of transparency raises ethical questions and challenges the notion of accountability. If an AI system makes a critical mistake or exhibits biased behavior, it becomes challenging to determine the root cause or assign responsibility, potentially leading to severe consequences.
Ethical Implications and Bias:
AI systems are only as good as the data they are trained on. If the training data contains biases or reflects societal prejudices, the AI model can perpetuate and amplify these biases. For example, facial recognition algorithms have been shown to exhibit racial and gender biases, leading to inaccurate and unfair results. Deploying AI without addressing such biases could exacerbate societal inequalities, reinforce discrimination, and erode trust in AI systems.
Job Displacement and Economic Impact:
While AI has the potential to automate repetitive and mundane tasks, this can lead to significant job displacement. Many industries and professions could be heavily impacted, resulting in unemployment and economic disruption. The rapid adoption of AI without careful planning and consideration for the workforce's future can lead to social unrest and exacerbate income inequality. It is crucial to explore strategies such as retraining and upskilling programs to ensure a smooth transition for affected workers.
Security and Privacy Concerns:
The proliferation of AI also raises significant security and privacy concerns. AI systems often rely on vast amounts of personal data, which, if mishandled, can compromise individuals' privacy. Additionally, AI-powered autonomous systems can become vulnerable to malicious attacks and manipulation, posing serious risks to critical infrastructure, public safety, and national security. Ensuring robust safeguards, stringent regulations, and responsible data governance must be priorities before widespread AI adoption.
Unintended Consequences and Unpredictability:
AI systems are designed to optimize specific objectives based on the data they are trained on. However, they may exhibit unintended consequences or behavior when faced with novel situations or scenarios not accounted for during training. These unintended consequences can have far-reaching effects and may be difficult to anticipate. It is crucial to thoroughly test and validate AI systems before deployment to minimize the potential for harm.
Conclusion:
While the potential benefits of AI are undeniable, the rush toward widespread adoption must be tempered with caution. We must address the lack of transparency, ethical implications, job displacement, security concerns, and the unpredictability of AI systems. By acknowledging these challenges and taking a thoughtful approach, we can develop responsible AI solutions that maximize the benefits while minimizing the risks. It is essential to prioritize research, regulatory frameworks, and public engagement to ensure that AI aligns with our values and serves humanity's best interests. Only then can we embark on a truly transformative AI journey that brings about positive change for all.
While AI can be wrong, your data shouldn't be. Book a demo to see how Panoply can help you get accurate business data in a snap.