Large language models (LLMs) have been proposed to support many healthcare tasks, including disease diagnosis and treatment personalization. While AI may be applied to assist or enhance the delivery of healthcare, there is also a risk of misuse: LLMs could be used to allocate resources according to unfair, unjust, or inaccurate criteria. For example, a social credit system uses big data to assess "trustworthiness" in society, penalizing those who score poorly on evaluation metrics defined solely by a power structure (e.g., a corporate entity or governing body). Such a system could be amplified by powerful LLMs capable of evaluating individuals from multimodal data, including financial transactions, internet activity, and other behavioral inputs. Healthcare data are among the most sensitive information that can be collected, and could be exploited to violate civil liberties or other rights through a "clinical credit system," which might, for instance, restrict access to care. The results of this study show that LLMs may be biased in favor of collective or systemic benefit over the protection of individual rights, potentially enabling this type of future misuse. Moreover, experiments in this report simulate how clinical datasets might be exploited with current LLMs, demonstrating the urgency of addressing these ethical dangers. Finally, strategies are proposed to mitigate the risk of misuse as large AI models are developed for healthcare.